Test Report: Hyper-V_Windows 19199

50cd99089b98d3ac0f2f64a84f76c9502bf70799:2024-07-09:35253

Tests failed (18/196)

TestAddons/parallel/Registry (72.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 19.0224ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xwvn2" [bc86917c-2fa7-44ed-8c10-fece12c6bff0] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019963s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7gkck" [bab934ae-9fd8-4c82-a9c1-9060abb4bd5e] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0258784s
addons_test.go:342: (dbg) Run:  kubectl --context addons-291800 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-291800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-291800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.1717044s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 ip: (2.823424s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0709 09:45:45.667334    8012 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-291800 ip"
2024/07/09 09:45:48 [DEBUG] GET http://172.18.206.170:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 addons disable registry --alsologtostderr -v=1: (16.0957839s)
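The only assertion that failed above is the empty-stderr check at addons_test.go:366: `minikube ip` itself succeeded, but the Docker CLI warning about the unresolved "default" context landed on stderr and tripped the test. A minimal Go sketch of that pass/fail condition (the helper name and the exact matching rule here are assumptions, not minikube's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// stderrIsClean mirrors the intent of the check at addons_test.go:366: the
// test expects "minikube ip" to write nothing to stderr, so any warning on
// stderr fails the test even when the command exits successfully.
// (The real helper's name and matching rules may differ; this is a sketch.)
func stderrIsClean(stderr string) bool {
	return strings.TrimSpace(stderr) == ""
}

func main() {
	warning := `W0709 09:45:45.667334    8012 main.go:291] Unable to resolve the current Docker CLI context "default"`
	fmt.Println(stderrIsClean(""))      // the expected outcome: true
	fmt.Println(stderrIsClean(warning)) // this run's outcome: false
}
```

The warning itself points at stale Docker CLI context metadata on the Jenkins host (a missing meta.json under the user's `.docker\contexts` directory), so the likely remediation is cleaning up that state on the agent rather than changing the test.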
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-291800 -n addons-291800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-291800 -n addons-291800: (13.1948795s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 logs -n 25: (10.8066107s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-955600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:37 PDT |                     |
	|         | -p download-only-955600              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:37 PDT | 09 Jul 24 09:37 PDT |
	| delete  | -p download-only-955600              | download-only-955600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:37 PDT | 09 Jul 24 09:37 PDT |
	| start   | -o=json --download-only              | download-only-530500 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:37 PDT |                     |
	|         | -p download-only-530500              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:38 PDT | 09 Jul 24 09:38 PDT |
	| delete  | -p download-only-530500              | download-only-530500 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:38 PDT | 09 Jul 24 09:38 PDT |
	| delete  | -p download-only-955600              | download-only-955600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:38 PDT | 09 Jul 24 09:38 PDT |
	| delete  | -p download-only-530500              | download-only-530500 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:38 PDT | 09 Jul 24 09:38 PDT |
	| start   | --download-only -p                   | binary-mirror-991700 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:38 PDT |                     |
	|         | binary-mirror-991700                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr                    |                      |                   |         |                     |                     |
	|         | --binary-mirror                      |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:51648               |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-991700              | binary-mirror-991700 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:38 PDT | 09 Jul 24 09:38 PDT |
	| addons  | enable dashboard -p                  | addons-291800        | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:38 PDT |                     |
	|         | addons-291800                        |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-291800        | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:38 PDT |                     |
	|         | addons-291800                        |                      |                   |         |                     |                     |
	| start   | -p addons-291800 --wait=true         | addons-291800        | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:38 PDT | 09 Jul 24 09:45 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --addons=registry                    |                      |                   |         |                     |                     |
	|         | --addons=metrics-server              |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress     |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |                   |         |                     |                     |
	| addons  | enable headlamp                      | addons-291800        | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:45 PDT | 09 Jul 24 09:45 PDT |
	|         | -p addons-291800                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-291800        | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:45 PDT | 09 Jul 24 09:45 PDT |
	|         | addons-291800                        |                      |                   |         |                     |                     |
	| addons  | addons-291800 addons disable         | addons-291800        | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:45 PDT | 09 Jul 24 09:46 PDT |
	|         | helm-tiller --alsologtostderr        |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| ip      | addons-291800 ip                     | addons-291800        | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:45 PDT | 09 Jul 24 09:45 PDT |
	| addons  | addons-291800 addons disable         | addons-291800        | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:45 PDT | 09 Jul 24 09:46 PDT |
	|         | registry --alsologtostderr           |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-291800        | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:46 PDT |                     |
	|         | -p addons-291800                     |                      |                   |         |                     |                     |
	| ssh     | addons-291800 ssh curl -s            | addons-291800        | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:46 PDT |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                      |                   |         |                     |                     |
	|         | nginx.example.com'                   |                      |                   |         |                     |                     |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 09:38:12
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 09:38:12.652759    7816 out.go:291] Setting OutFile to fd 856 ...
	I0709 09:38:12.653566    7816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 09:38:12.653566    7816 out.go:304] Setting ErrFile to fd 860...
	I0709 09:38:12.653566    7816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 09:38:12.676784    7816 out.go:298] Setting JSON to false
	I0709 09:38:12.679663    7816 start.go:129] hostinfo: {"hostname":"minikube1","uptime":1361,"bootTime":1720541731,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 09:38:12.679663    7816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 09:38:12.687052    7816 out.go:177] * [addons-291800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 09:38:12.691273    7816 notify.go:220] Checking for updates...
	I0709 09:38:12.691273    7816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 09:38:12.693999    7816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 09:38:12.696899    7816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 09:38:12.699250    7816 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 09:38:12.702155    7816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 09:38:12.704318    7816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 09:38:17.877618    7816 out.go:177] * Using the hyperv driver based on user configuration
	I0709 09:38:17.881610    7816 start.go:297] selected driver: hyperv
	I0709 09:38:17.881610    7816 start.go:901] validating driver "hyperv" against <nil>
	I0709 09:38:17.881610    7816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 09:38:17.927255    7816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 09:38:17.929212    7816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 09:38:17.929212    7816 cni.go:84] Creating CNI manager for ""
	I0709 09:38:17.929212    7816 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 09:38:17.929212    7816 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0709 09:38:17.929651    7816 start.go:340] cluster config:
	{Name:addons-291800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-291800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 09:38:17.929651    7816 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 09:38:17.934056    7816 out.go:177] * Starting "addons-291800" primary control-plane node in "addons-291800" cluster
	I0709 09:38:17.936910    7816 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 09:38:17.937175    7816 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 09:38:17.937214    7816 cache.go:56] Caching tarball of preloaded images
	I0709 09:38:17.937267    7816 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 09:38:17.937267    7816 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 09:38:17.938169    7816 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\config.json ...
	I0709 09:38:17.938612    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\config.json: {Name:mk19c15bbf7c0305d0ab71005dee0c217353bae8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:38:17.940033    7816 start.go:360] acquireMachinesLock for addons-291800: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 09:38:17.940033    7816 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-291800"
	I0709 09:38:17.940033    7816 start.go:93] Provisioning new machine with config: &{Name:addons-291800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-291800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 09:38:17.940571    7816 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 09:38:17.943508    7816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0709 09:38:17.943508    7816 start.go:159] libmachine.API.Create for "addons-291800" (driver="hyperv")
	I0709 09:38:17.943508    7816 client.go:168] LocalClient.Create starting
	I0709 09:38:17.944705    7816 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 09:38:18.564831    7816 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 09:38:18.806542    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 09:38:20.861159    7816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 09:38:20.861159    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:20.861159    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 09:38:22.503003    7816 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 09:38:22.510702    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:22.510702    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 09:38:23.942026    7816 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 09:38:23.955867    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:23.955867    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 09:38:27.469678    7816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 09:38:27.479532    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:27.482256    7816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 09:38:27.985321    7816 main.go:141] libmachine: Creating SSH key...
	I0709 09:38:28.323482    7816 main.go:141] libmachine: Creating VM...
	I0709 09:38:28.333511    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 09:38:31.022650    7816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 09:38:31.022650    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:31.024247    7816 main.go:141] libmachine: Using switch "Default Switch"
	I0709 09:38:31.024379    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 09:38:32.663733    7816 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 09:38:32.663993    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:32.663993    7816 main.go:141] libmachine: Creating VHD
	I0709 09:38:32.664082    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 09:38:36.362830    7816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F297A1B2-D311-4328-B145-4DC0A62D4F9C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 09:38:36.362884    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:36.362884    7816 main.go:141] libmachine: Writing magic tar header
	I0709 09:38:36.362884    7816 main.go:141] libmachine: Writing SSH key tar header
	I0709 09:38:36.372934    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 09:38:39.452665    7816 main.go:141] libmachine: [stdout =====>] : 
	I0709 09:38:39.462990    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:39.462990    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\disk.vhd' -SizeBytes 20000MB
	I0709 09:38:41.921222    7816 main.go:141] libmachine: [stdout =====>] : 
	I0709 09:38:41.921222    7816 main.go:141] libmachine: [stderr =====>] : 
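The three PowerShell invocations above show the driver's disk-provisioning trick: create a tiny 10MB fixed VHD, write tar headers into it, convert it to a dynamic VHD, then resize it to the requested 20000MB. A sketch reconstructing that command sequence from the log (`vhdCommands` is a hypothetical helper; the cmdlet names and flags are copied from the log lines, not from minikube's source):

```go
package main

import "fmt"

// vhdCommands rebuilds the PowerShell sequence visible in the log: a small
// fixed-size VHD is created first (so tar headers can be written into it),
// then converted to a dynamic VHD, then resized to the final disk size.
func vhdCommands(machineDir string, sizeMB int) []string {
	fixed := machineDir + `\fixed.vhd`
	disk := machineDir + `\disk.vhd`
	return []string{
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes %dMB`, disk, sizeMB),
	}
}

func main() {
	// The directory and size below match this run's log.
	for _, c := range vhdCommands(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800`, 20000) {
		fmt.Println(c)
	}
}
```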
	I0709 09:38:41.921222    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-291800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0709 09:38:45.876092    7816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-291800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 09:38:45.879408    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:45.879408    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-291800 -DynamicMemoryEnabled $false
	I0709 09:38:48.008622    7816 main.go:141] libmachine: [stdout =====>] : 
	I0709 09:38:48.008622    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:48.008622    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-291800 -Count 2
	I0709 09:38:50.025102    7816 main.go:141] libmachine: [stdout =====>] : 
	I0709 09:38:50.025102    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:50.034757    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-291800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\boot2docker.iso'
	I0709 09:38:52.466443    7816 main.go:141] libmachine: [stdout =====>] : 
	I0709 09:38:52.466659    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:52.466762    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-291800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\disk.vhd'
	I0709 09:38:55.018171    7816 main.go:141] libmachine: [stdout =====>] : 
	I0709 09:38:55.018171    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:55.018171    7816 main.go:141] libmachine: Starting VM...
	I0709 09:38:55.018408    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-291800
	I0709 09:38:58.062342    7816 main.go:141] libmachine: [stdout =====>] : 
	I0709 09:38:58.062342    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:38:58.062342    7816 main.go:141] libmachine: Waiting for host to start...
	I0709 09:38:58.062342    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:00.374980    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:00.374980    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:00.374980    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:39:02.947391    7816 main.go:141] libmachine: [stdout =====>] : 
	I0709 09:39:02.949345    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:03.949843    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:06.188479    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:06.195013    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:06.195129    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:39:08.697994    7816 main.go:141] libmachine: [stdout =====>] : 
	I0709 09:39:08.706722    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:09.720320    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:11.904887    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:11.904887    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:11.906220    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:39:14.420281    7816 main.go:141] libmachine: [stdout =====>] : 
	I0709 09:39:14.420281    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:15.438792    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:17.604802    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:17.609949    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:17.609949    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:39:20.096546    7816 main.go:141] libmachine: [stdout =====>] : 
	I0709 09:39:20.096546    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:21.109318    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:23.408558    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:23.408558    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:23.408717    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:39:25.930563    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:39:25.942029    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:25.942269    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:27.994394    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:28.005432    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:28.005546    7816 machine.go:94] provisionDockerMachine start ...
	I0709 09:39:28.005546    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:30.105263    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:30.105263    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:30.115489    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:39:32.505515    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:39:32.515845    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:32.521694    7816 main.go:141] libmachine: Using SSH client type: native
	I0709 09:39:32.529196    7816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.170 22 <nil> <nil>}
	I0709 09:39:32.529196    7816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 09:39:32.668255    7816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 09:39:32.668255    7816 buildroot.go:166] provisioning hostname "addons-291800"
	I0709 09:39:32.668255    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:34.652698    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:34.663040    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:34.663128    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:39:37.056287    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:39:37.056287    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:37.062342    7816 main.go:141] libmachine: Using SSH client type: native
	I0709 09:39:37.062657    7816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.170 22 <nil> <nil>}
	I0709 09:39:37.062657    7816 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-291800 && echo "addons-291800" | sudo tee /etc/hostname
	I0709 09:39:37.233972    7816 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-291800
	
	I0709 09:39:37.234162    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:39.279384    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:39.279384    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:39.279661    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:39:41.665063    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:39:41.675074    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:41.680799    7816 main.go:141] libmachine: Using SSH client type: native
	I0709 09:39:41.681434    7816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.170 22 <nil> <nil>}
	I0709 09:39:41.681434    7816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-291800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-291800/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-291800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 09:39:41.832596    7816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 09:39:41.832596    7816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 09:39:41.832596    7816 buildroot.go:174] setting up certificates
	I0709 09:39:41.832596    7816 provision.go:84] configureAuth start
	I0709 09:39:41.832596    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:43.883650    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:43.883715    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:43.883715    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:39:46.292107    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:39:46.301932    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:46.302052    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:48.366799    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:48.377639    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:48.377707    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:39:50.786437    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:39:50.796023    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:50.796159    7816 provision.go:143] copyHostCerts
	I0709 09:39:50.796684    7816 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 09:39:50.798457    7816 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 09:39:50.800073    7816 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 09:39:50.801247    7816 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-291800 san=[127.0.0.1 172.18.206.170 addons-291800 localhost minikube]
	I0709 09:39:50.879379    7816 provision.go:177] copyRemoteCerts
	I0709 09:39:50.889980    7816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 09:39:50.889980    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:52.896275    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:52.896365    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:52.896433    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:39:55.336110    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:39:55.346771    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:55.346771    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:39:55.453862    7816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5638775s)
	I0709 09:39:55.454399    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0709 09:39:55.489537    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 09:39:55.542390    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 09:39:55.588766    7816 provision.go:87] duration metric: took 13.7561553s to configureAuth
	I0709 09:39:55.588766    7816 buildroot.go:189] setting minikube options for container-runtime
	I0709 09:39:55.589578    7816 config.go:182] Loaded profile config "addons-291800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 09:39:55.589669    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:39:57.610359    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:39:57.621325    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:39:57.621325    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:40:00.099620    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:40:00.099620    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:00.104584    7816 main.go:141] libmachine: Using SSH client type: native
	I0709 09:40:00.105002    7816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.170 22 <nil> <nil>}
	I0709 09:40:00.105002    7816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 09:40:00.239067    7816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 09:40:00.239067    7816 buildroot.go:70] root file system type: tmpfs
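The `df --output=fstype / | tail -n 1` probe above reduces df's two-line output (header plus value) to just the root filesystem type, `tmpfs` here, which tells the provisioner it is on a Buildroot live image. The same last-line reduction in Go, under the assumption the output shape matches the log:

```go
package main

import (
	"fmt"
	"strings"
)

// lastLine mirrors `… | tail -n 1`: keep only the final non-empty line,
// dropping df's "Type" header row.
func lastLine(out string) string {
	lines := strings.Split(strings.TrimSpace(out), "\n")
	return lines[len(lines)-1]
}

func main() {
	fmt.Println(lastLine("Type\ntmpfs\n")) // tmpfs
}
```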
	I0709 09:40:00.239709    7816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 09:40:00.239798    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:40:02.303036    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:40:02.303036    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:02.317093    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:40:04.729051    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:40:04.729051    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:04.735038    7816 main.go:141] libmachine: Using SSH client type: native
	I0709 09:40:04.735902    7816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.170 22 <nil> <nil>}
	I0709 09:40:04.735902    7816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 09:40:04.892089    7816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 09:40:04.892322    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:40:06.928801    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:40:06.929053    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:06.929053    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:40:09.360670    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:40:09.371718    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:09.377723    7816 main.go:141] libmachine: Using SSH client type: native
	I0709 09:40:09.377849    7816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.170 22 <nil> <nil>}
	I0709 09:40:09.377849    7816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 09:40:11.484419    7816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 09:40:11.484506    7816 machine.go:97] duration metric: took 43.4788764s to provisionDockerMachine
	I0709 09:40:11.484506    7816 client.go:171] duration metric: took 1m53.5408797s to LocalClient.Create
	I0709 09:40:11.484573    7816 start.go:167] duration metric: took 1m53.5409122s to libmachine.API.Create "addons-291800"
	I0709 09:40:11.484640    7816 start.go:293] postStartSetup for "addons-291800" (driver="hyperv")
	I0709 09:40:11.484640    7816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 09:40:11.497587    7816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 09:40:11.497587    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:40:13.559254    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:40:13.559254    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:13.569877    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:40:15.996331    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:40:15.996331    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:16.006841    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:40:16.113912    7816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6163206s)
	I0709 09:40:16.127301    7816 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 09:40:16.130139    7816 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 09:40:16.130139    7816 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 09:40:16.135712    7816 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 09:40:16.136138    7816 start.go:296] duration metric: took 4.6514928s for postStartSetup
	I0709 09:40:16.136459    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:40:18.178224    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:40:18.178224    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:18.178629    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:40:20.610355    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:40:20.620859    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:20.621086    7816 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\config.json ...
	I0709 09:40:20.624032    7816 start.go:128] duration metric: took 2m2.6832442s to createHost
	I0709 09:40:20.624180    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:40:22.662042    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:40:22.662042    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:22.670503    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:40:25.096769    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:40:25.096769    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:25.114643    7816 main.go:141] libmachine: Using SSH client type: native
	I0709 09:40:25.115369    7816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.170 22 <nil> <nil>}
	I0709 09:40:25.115369    7816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 09:40:25.246993    7816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720543225.250472063
	
	I0709 09:40:25.246993    7816 fix.go:216] guest clock: 1720543225.250472063
	I0709 09:40:25.246993    7816 fix.go:229] Guest: 2024-07-09 09:40:25.250472063 -0700 PDT Remote: 2024-07-09 09:40:20.6241313 -0700 PDT m=+128.056696301 (delta=4.626340763s)
	I0709 09:40:25.247162    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:40:27.265958    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:40:27.265958    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:27.277060    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:40:29.665330    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:40:29.665330    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:29.682436    7816 main.go:141] libmachine: Using SSH client type: native
	I0709 09:40:29.682986    7816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.170 22 <nil> <nil>}
	I0709 09:40:29.683117    7816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720543225
	I0709 09:40:29.834804    7816 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 16:40:25 UTC 2024
	
	I0709 09:40:29.834804    7816 fix.go:236] clock set: Tue Jul  9 16:40:25 UTC 2024
	 (err=<nil>)
	I0709 09:40:29.834804    7816 start.go:83] releasing machines lock for "addons-291800", held for 2m11.8946327s
	I0709 09:40:29.834804    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:40:31.870477    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:40:31.870477    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:31.881181    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:40:34.288879    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:40:34.300005    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:34.304645    7816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 09:40:34.304849    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:40:34.315096    7816 ssh_runner.go:195] Run: cat /version.json
	I0709 09:40:34.315096    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:40:36.416485    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:40:36.416485    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:36.416602    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:40:36.418786    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:40:36.419084    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:36.419206    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:40:38.964619    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:40:38.964619    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:38.976166    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:40:38.996395    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:40:38.996395    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:40:38.998039    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:40:39.142850    7816 ssh_runner.go:235] Completed: cat /version.json: (4.8268809s)
	I0709 09:40:39.142850    7816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8372427s)
	I0709 09:40:39.153496    7816 ssh_runner.go:195] Run: systemctl --version
	I0709 09:40:39.173545    7816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0709 09:40:39.182366    7816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 09:40:39.193434    7816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 09:40:39.219122    7816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 09:40:39.219122    7816 start.go:494] detecting cgroup driver to use...
	I0709 09:40:39.219122    7816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 09:40:39.263154    7816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 09:40:39.296436    7816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 09:40:39.314062    7816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 09:40:39.326617    7816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 09:40:39.359342    7816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 09:40:39.389513    7816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 09:40:39.422239    7816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 09:40:39.452696    7816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 09:40:39.483263    7816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 09:40:39.512020    7816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 09:40:39.541686    7816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 09:40:39.570554    7816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 09:40:39.600001    7816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 09:40:39.631517    7816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 09:40:39.816803    7816 ssh_runner.go:195] Run: sudo systemctl restart containerd
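	The sed invocations above rewrite /etc/containerd/config.toml in place to pin the pause image and force the "cgroupfs" driver. A minimal runnable sketch of the same two substitutions, applied here to a scratch copy with illustrative sample content rather than the live config (which would need root):

```shell
# Scratch copy of a containerd config fragment (sample content, not the real file).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
EOF

# The same substitutions the log shows being run on the VM:
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

# Capture the rewritten lines (leading indentation is preserved by the \1 backreference).
sandbox_line=$(grep sandbox_image "$cfg")
cgroup_line=$(grep SystemdCgroup "$cfg")
echo "$sandbox_line"
echo "$cgroup_line"
```

	The `\1` group is what keeps TOML indentation intact, so the edit works at any nesting depth of the config.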
	I0709 09:40:39.845649    7816 start.go:494] detecting cgroup driver to use...
	I0709 09:40:39.859972    7816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 09:40:39.895569    7816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 09:40:39.927199    7816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 09:40:39.970378    7816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 09:40:40.002411    7816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 09:40:40.033493    7816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 09:40:40.100906    7816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 09:40:40.123436    7816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 09:40:40.169298    7816 ssh_runner.go:195] Run: which cri-dockerd
	I0709 09:40:40.189480    7816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 09:40:40.206849    7816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 09:40:40.247723    7816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 09:40:40.428020    7816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 09:40:40.592503    7816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 09:40:40.592503    7816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 09:40:40.634929    7816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 09:40:40.805702    7816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 09:40:43.343815    7816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5381103s)
	I0709 09:40:43.358846    7816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 09:40:43.392301    7816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 09:40:43.424896    7816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 09:40:43.605823    7816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 09:40:43.802364    7816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 09:40:43.987756    7816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 09:40:44.031824    7816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 09:40:44.069816    7816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 09:40:44.253803    7816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 09:40:44.355736    7816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 09:40:44.368740    7816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 09:40:44.376208    7816 start.go:562] Will wait 60s for crictl version
	I0709 09:40:44.389943    7816 ssh_runner.go:195] Run: which crictl
	I0709 09:40:44.410063    7816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 09:40:44.457193    7816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 09:40:44.467139    7816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 09:40:44.509841    7816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 09:40:44.546643    7816 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 09:40:44.546697    7816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 09:40:44.550887    7816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 09:40:44.550887    7816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 09:40:44.550887    7816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 09:40:44.550887    7816 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 09:40:44.553280    7816 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 09:40:44.553280    7816 ip.go:210] interface addr: 172.18.192.1/20
	I0709 09:40:44.567336    7816 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 09:40:44.569229    7816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 09:40:44.593757    7816 kubeadm.go:877] updating cluster {Name:addons-291800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.2 ClusterName:addons-291800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 09:40:44.593757    7816 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 09:40:44.605801    7816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 09:40:44.632537    7816 docker.go:685] Got preloaded images: 
	I0709 09:40:44.632537    7816 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 09:40:44.643509    7816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 09:40:44.674463    7816 ssh_runner.go:195] Run: which lz4
	I0709 09:40:44.693767    7816 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0709 09:40:44.695713    7816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 09:40:44.699771    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0709 09:40:46.349208    7816 docker.go:649] duration metric: took 1.6688092s to copy over tarball
	I0709 09:40:46.360341    7816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 09:40:51.465664    7816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.1053175s)
	I0709 09:40:51.465664    7816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0709 09:40:51.532520    7816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 09:40:51.551695    7816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 09:40:51.596733    7816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 09:40:51.789675    7816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 09:40:57.417697    7816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6280154s)
	I0709 09:40:57.429118    7816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 09:40:57.462552    7816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
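	The preload step above copies `preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4` to the VM and unpacks it into `/var` with `tar --xattrs --xattrs-include security.capability -I lz4`. A runnable sketch of the same pack/unpack round-trip, substituting gzip for lz4 so it works without the lz4 binary (all paths and file names are illustrative):

```shell
workdir=$(mktemp -d)
mkdir -p "$workdir/src/lib/docker"
echo "layer-data" > "$workdir/src/lib/docker/layer1"

# Pack, then unpack into a separate root, mirroring
# `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`
# (gzip stands in for lz4 here).
tar -C "$workdir/src" -czf "$workdir/preloaded.tar.gz" lib
mkdir -p "$workdir/var"
tar --xattrs -C "$workdir/var" -xzf "$workdir/preloaded.tar.gz"

restored=$(cat "$workdir/var/lib/docker/layer1")
echo "$restored"
```

	The `--xattrs` flags matter in the real run because image layers carry `security.capability` attributes that plain extraction would drop.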
	I0709 09:40:57.462643    7816 cache_images.go:84] Images are preloaded, skipping loading
	I0709 09:40:57.462716    7816 kubeadm.go:928] updating node { 172.18.206.170 8443 v1.30.2 docker true true} ...
	I0709 09:40:57.463160    7816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-291800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.206.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-291800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 09:40:57.472505    7816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 09:40:57.514923    7816 cni.go:84] Creating CNI manager for ""
	I0709 09:40:57.514923    7816 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 09:40:57.514923    7816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 09:40:57.514923    7816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.206.170 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-291800 NodeName:addons-291800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.206.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.206.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 09:40:57.516468    7816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.206.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-291800"
	  kubeletExtraArgs:
	    node-ip: 172.18.206.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.206.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0709 09:40:57.527515    7816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 09:40:57.547925    7816 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 09:40:57.560877    7816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 09:40:57.564026    7816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0709 09:40:57.610501    7816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 09:40:57.639697    7816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0709 09:40:57.680988    7816 ssh_runner.go:195] Run: grep 172.18.206.170	control-plane.minikube.internal$ /etc/hosts
	I0709 09:40:57.684266    7816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.206.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
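	The `{ grep -v …; echo …; } > /tmp/h.$$` pattern above makes the /etc/hosts entry idempotent: any stale line for the name is stripped before the fresh one is appended. The same pattern, exercised here against a temporary file instead of the real /etc/hosts (sample entries are illustrative):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.18.206.170\tcontrol-plane.minikube.internal\n' > "$hosts"

# Strip any existing entry for the name, then append the current one —
# the same filter-and-append pattern the log runs against /etc/hosts.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '172.18.206.170\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

# Exactly one entry survives, no matter how often this runs.
count=$(grep -c 'control-plane.minikube.internal' "$hosts")
echo "$count"
```

	The `$'\t…'` quoting makes bash expand the tab before grep sees it, so only whole hostname fields match, not substrings elsewhere on a line.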
	I0709 09:40:57.722286    7816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 09:40:57.926158    7816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 09:40:57.956737    7816 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800 for IP: 172.18.206.170
	I0709 09:40:57.956883    7816 certs.go:194] generating shared ca certs ...
	I0709 09:40:57.956883    7816 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:40:57.957303    7816 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 09:40:58.018745    7816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt ...
	I0709 09:40:58.018745    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt: {Name:mk7a559291b59fd1cacf23acd98c76aadd417440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:40:58.020417    7816 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key ...
	I0709 09:40:58.020417    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key: {Name:mkbedd9bb05780b48b3744f1500f6ab6cea55798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:40:58.022073    7816 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 09:40:58.219204    7816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0709 09:40:58.219204    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd3d06d8ce13b6ea5bb86cd17b70e85416bbf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:40:58.223712    7816 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key ...
	I0709 09:40:58.223712    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkf3a613f937d3e2839d9a0e4a8e5134d5e75dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:40:58.225126    7816 certs.go:256] generating profile certs ...
	I0709 09:40:58.226522    7816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.key
	I0709 09:40:58.226522    7816 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt with IP's: []
	I0709 09:40:58.935538    7816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt ...
	I0709 09:40:58.935538    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: {Name:mkc453e6519956e4e3f96458fdfbbeff33382b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:40:58.939665    7816 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.key ...
	I0709 09:40:58.939665    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.key: {Name:mkf747e1eac0ba759bc4e048e8fa8c61bd868762 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:40:58.941115    7816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.key.8c9b2900
	I0709 09:40:58.942141    7816 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.crt.8c9b2900 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.206.170]
	I0709 09:40:59.147439    7816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.crt.8c9b2900 ...
	I0709 09:40:59.157546    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.crt.8c9b2900: {Name:mk2b3a12896c9cac57271c81c3a9938923f3f45a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:40:59.157836    7816 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.key.8c9b2900 ...
	I0709 09:40:59.157836    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.key.8c9b2900: {Name:mkab7878b675694446939c2e55a2386f576eae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:40:59.159406    7816 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.crt.8c9b2900 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.crt
	I0709 09:40:59.168358    7816 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.key.8c9b2900 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.key
	I0709 09:40:59.172283    7816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\proxy-client.key
	I0709 09:40:59.173347    7816 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\proxy-client.crt with IP's: []
	I0709 09:40:59.313783    7816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\proxy-client.crt ...
	I0709 09:40:59.313783    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\proxy-client.crt: {Name:mk3e277cef1fb4ecee5acdf00ccde3981ed23985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:40:59.319832    7816 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\proxy-client.key ...
	I0709 09:40:59.319832    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\proxy-client.key: {Name:mkd776edbebcc38ee07475a50c69fc4f74b3bb10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:40:59.324335    7816 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 09:40:59.332653    7816 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 09:40:59.332959    7816 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 09:40:59.333182    7816 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 09:40:59.334792    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 09:40:59.376991    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 09:40:59.418004    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 09:40:59.466343    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 09:40:59.506205    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0709 09:40:59.545507    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 09:40:59.587672    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 09:40:59.628212    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0709 09:40:59.669508    7816 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 09:40:59.708440    7816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 09:40:59.747785    7816 ssh_runner.go:195] Run: openssl version
	I0709 09:40:59.767579    7816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 09:40:59.796721    7816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 09:40:59.803535    7816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 09:40:59.814701    7816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 09:40:59.835790    7816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
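	The `b5213941.0` symlink above is OpenSSL's hashed-directory convention: `/etc/ssl/certs` is scanned by subject-name hash, so the CA cert is linked under `<subject-hash>.0`. A sketch of how that hash and link are produced, using a throwaway self-signed cert in a temp directory (the CN and paths are illustrative stand-ins for minikubeCA.pem):

```shell
dir=$(mktemp -d)
# Generate a throwaway CA cert (stand-in for minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" 2>/dev/null

# `openssl x509 -hash` prints the subject-name hash; the log's
# b5213941.0 is that hash for minikubeCA with the ".0" suffix.
hash=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")
ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"

readlink "$dir/$hash.0"
```

	The `.0` suffix disambiguates distinct certificates that happen to share a subject hash; a collision would get `.1`, `.2`, and so on.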
	I0709 09:40:59.865534    7816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 09:40:59.873386    7816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 09:40:59.873872    7816 kubeadm.go:391] StartCluster: {Name:addons-291800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:addons-291800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 09:40:59.881785    7816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 09:40:59.920280    7816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 09:40:59.951264    7816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 09:40:59.981348    7816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 09:40:59.997390    7816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 09:40:59.997468    7816 kubeadm.go:156] found existing configuration files:
	
	I0709 09:41:00.009119    7816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 09:41:00.029674    7816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 09:41:00.040609    7816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 09:41:00.072068    7816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 09:41:00.091964    7816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 09:41:00.106037    7816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 09:41:00.133798    7816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 09:41:00.149169    7816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 09:41:00.162257    7816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 09:41:00.192609    7816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 09:41:00.208752    7816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 09:41:00.220438    7816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
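The grep-then-rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected API-server endpoint, and removed otherwise. A minimal sketch of that logic, using a temp directory and made-up file contents in place of the real /etc/kubernetes:

```shell
# Stale-config cleanup as shown in the log: keep a kubeconfig only if it
# already points at https://control-plane.minikube.internal:8443.
# The temp dir and sample file contents below stand in for /etc/kubernetes.
endpoint="https://control-plane.minikube.internal:8443"
confdir="$(mktemp -d)"
printf 'server: %s\n' "$endpoint" > "$confdir/admin.conf"        # current file
printf 'server: https://stale:8443\n' > "$confdir/kubelet.conf"  # stale file
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! grep -q "$endpoint" "$confdir/$f" 2>/dev/null; then
    rm -f "$confdir/$f"    # mirrors the log's "sudo rm -f" on a grep miss
  fi
done
ls "$confdir"
```

In this run all four greps fail with status 2 (the files don't exist yet on a fresh VM), so all four `rm -f` calls are no-ops and `kubeadm init` proceeds against a clean /etc/kubernetes.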
	I0709 09:41:00.238872    7816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 09:41:00.471013    7816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 09:41:13.799427    7816 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 09:41:13.799805    7816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 09:41:13.800014    7816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 09:41:13.800284    7816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 09:41:13.800634    7816 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 09:41:13.800734    7816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 09:41:13.803469    7816 out.go:204]   - Generating certificates and keys ...
	I0709 09:41:13.803679    7816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 09:41:13.803833    7816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 09:41:13.804005    7816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 09:41:13.804241    7816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 09:41:13.804241    7816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 09:41:13.804241    7816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 09:41:13.804241    7816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 09:41:13.805278    7816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-291800 localhost] and IPs [172.18.206.170 127.0.0.1 ::1]
	I0709 09:41:13.805397    7816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 09:41:13.805618    7816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-291800 localhost] and IPs [172.18.206.170 127.0.0.1 ::1]
	I0709 09:41:13.805618    7816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 09:41:13.805618    7816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 09:41:13.805618    7816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 09:41:13.806311    7816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 09:41:13.806523    7816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 09:41:13.806605    7816 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 09:41:13.806605    7816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 09:41:13.806605    7816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 09:41:13.806605    7816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 09:41:13.807207    7816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 09:41:13.807528    7816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 09:41:13.813228    7816 out.go:204]   - Booting up control plane ...
	I0709 09:41:13.813715    7816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 09:41:13.813835    7816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 09:41:13.813835    7816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 09:41:13.813835    7816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 09:41:13.814452    7816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 09:41:13.814592    7816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 09:41:13.814787    7816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 09:41:13.814998    7816 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 09:41:13.814998    7816 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001135351s
	I0709 09:41:13.814998    7816 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 09:41:13.814998    7816 kubeadm.go:309] [api-check] The API server is healthy after 6.502992486s
	I0709 09:41:13.815680    7816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 09:41:13.815680    7816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 09:41:13.815680    7816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 09:41:13.816464    7816 kubeadm.go:309] [mark-control-plane] Marking the node addons-291800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 09:41:13.816701    7816 kubeadm.go:309] [bootstrap-token] Using token: 22o3xx.ohbfa5z2e4g2m810
	I0709 09:41:13.816869    7816 out.go:204]   - Configuring RBAC rules ...
	I0709 09:41:13.822011    7816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 09:41:13.822011    7816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 09:41:13.822542    7816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 09:41:13.822719    7816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 09:41:13.822949    7816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 09:41:13.823285    7816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 09:41:13.823464    7816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 09:41:13.823610    7816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 09:41:13.823610    7816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 09:41:13.823610    7816 kubeadm.go:309] 
	I0709 09:41:13.823610    7816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 09:41:13.823610    7816 kubeadm.go:309] 
	I0709 09:41:13.823610    7816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 09:41:13.823610    7816 kubeadm.go:309] 
	I0709 09:41:13.823610    7816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 09:41:13.824205    7816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 09:41:13.824256    7816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 09:41:13.824408    7816 kubeadm.go:309] 
	I0709 09:41:13.824564    7816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 09:41:13.824564    7816 kubeadm.go:309] 
	I0709 09:41:13.824564    7816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 09:41:13.824564    7816 kubeadm.go:309] 
	I0709 09:41:13.824811    7816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 09:41:13.825026    7816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 09:41:13.825239    7816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 09:41:13.825239    7816 kubeadm.go:309] 
	I0709 09:41:13.825417    7816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 09:41:13.825561    7816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 09:41:13.825561    7816 kubeadm.go:309] 
	I0709 09:41:13.825561    7816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 22o3xx.ohbfa5z2e4g2m810 \
	I0709 09:41:13.825561    7816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 09:41:13.825561    7816 kubeadm.go:309] 	--control-plane 
	I0709 09:41:13.825561    7816 kubeadm.go:309] 
	I0709 09:41:13.826296    7816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 09:41:13.826296    7816 kubeadm.go:309] 
	I0709 09:41:13.826466    7816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 22o3xx.ohbfa5z2e4g2m810 \
	I0709 09:41:13.826767    7816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 09:41:13.826767    7816 cni.go:84] Creating CNI manager for ""
	I0709 09:41:13.826767    7816 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 09:41:13.828967    7816 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0709 09:41:13.844400    7816 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0709 09:41:13.864379    7816 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
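The scp above writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents; the following is a plausible minimal bridge conflist of the same general shape (the plugin names, subnet, and options are assumptions, not the exact bytes minikube ships):

```shell
# Hypothetical stand-in for the 1-k8s.conflist the log transfers; the real
# 496-byte payload is generated by minikube and not shown in this log.
cat > 1-k8s.conflist <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
```

Kubelet's CNI support reads the lexically first *.conflist in /etc/cni/net.d, which is why minikube prefixes the filename with "1-".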
	I0709 09:41:13.900812    7816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 09:41:13.918014    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-291800 minikube.k8s.io/updated_at=2024_07_09T09_41_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=addons-291800 minikube.k8s.io/primary=true
	I0709 09:41:13.918014    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:13.941867    7816 ops.go:34] apiserver oom_adj: -16
	I0709 09:41:14.119872    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:14.629311    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:15.120329    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:15.633252    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:16.120653    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:16.632383    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:17.119793    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:17.621067    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:18.123693    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:18.622327    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:19.126190    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:19.623481    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:20.145493    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:20.629408    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:21.125321    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:21.630292    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:22.131657    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:22.630818    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:23.125785    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:23.623510    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:24.119583    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:24.626968    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:25.121708    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:25.627600    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:26.120834    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:26.631940    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:27.121007    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:27.636915    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:28.124780    7816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 09:41:28.268862    7816 kubeadm.go:1107] duration metric: took 14.3680333s to wait for elevateKubeSystemPrivileges
	W0709 09:41:28.268862    7816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 09:41:28.268862    7816 kubeadm.go:393] duration metric: took 28.395043s to StartCluster
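The ~29 `kubectl get sa default` lines above are a poll loop: minikube retries roughly every 500ms until the "default" ServiceAccount appears in the new cluster (14.37s here), because workloads cannot be created before it exists. A sketch of that until-success loop, with a stand-in command that starts succeeding on the third attempt instead of a live kubectl:

```shell
# Poll-until-ready loop as seen in the log. check() simulates
# "kubectl get sa default", which fails until the ServiceAccount
# controller has created the default ServiceAccount.
attempts=0
check() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]    # pretend the SA appears on the 3rd try
}
until check; do
  sleep 0.01               # the real loop waits ~500ms between tries
done
echo "ready after $attempts attempts"
```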
	I0709 09:41:28.268862    7816 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:41:28.268862    7816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 09:41:28.270791    7816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:41:28.272991    7816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 09:41:28.272991    7816 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.206.170 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 09:41:28.272991    7816 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0709 09:41:28.273570    7816 addons.go:69] Setting yakd=true in profile "addons-291800"
	I0709 09:41:28.273570    7816 addons.go:234] Setting addon yakd=true in "addons-291800"
	I0709 09:41:28.273570    7816 config.go:182] Loaded profile config "addons-291800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 09:41:28.273570    7816 addons.go:69] Setting inspektor-gadget=true in profile "addons-291800"
	I0709 09:41:28.273570    7816 addons.go:234] Setting addon inspektor-gadget=true in "addons-291800"
	I0709 09:41:28.273570    7816 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-291800"
	I0709 09:41:28.273570    7816 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-291800"
	I0709 09:41:28.273570    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.273570    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.274308    7816 addons.go:69] Setting gcp-auth=true in profile "addons-291800"
	I0709 09:41:28.274308    7816 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-291800"
	I0709 09:41:28.274308    7816 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-291800"
	I0709 09:41:28.274308    7816 mustload.go:65] Loading cluster: addons-291800
	I0709 09:41:28.274717    7816 addons.go:69] Setting volcano=true in profile "addons-291800"
	I0709 09:41:28.274717    7816 addons.go:69] Setting ingress=true in profile "addons-291800"
	I0709 09:41:28.274717    7816 addons.go:234] Setting addon volcano=true in "addons-291800"
	I0709 09:41:28.274717    7816 addons.go:234] Setting addon ingress=true in "addons-291800"
	I0709 09:41:28.274935    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.275045    7816 addons.go:69] Setting storage-provisioner=true in profile "addons-291800"
	I0709 09:41:28.275045    7816 addons.go:234] Setting addon storage-provisioner=true in "addons-291800"
	I0709 09:41:28.275167    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.275167    7816 config.go:182] Loaded profile config "addons-291800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 09:41:28.275167    7816 addons.go:69] Setting helm-tiller=true in profile "addons-291800"
	I0709 09:41:28.274717    7816 addons.go:69] Setting registry=true in profile "addons-291800"
	I0709 09:41:28.273570    7816 addons.go:69] Setting metrics-server=true in profile "addons-291800"
	I0709 09:41:28.275383    7816 addons.go:234] Setting addon metrics-server=true in "addons-291800"
	I0709 09:41:28.274935    7816 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-291800"
	I0709 09:41:28.275608    7816 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-291800"
	I0709 09:41:28.275167    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.275802    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.275167    7816 addons.go:69] Setting default-storageclass=true in profile "addons-291800"
	I0709 09:41:28.276074    7816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-291800"
	I0709 09:41:28.275167    7816 addons.go:69] Setting ingress-dns=true in profile "addons-291800"
	I0709 09:41:28.276214    7816 addons.go:234] Setting addon ingress-dns=true in "addons-291800"
	I0709 09:41:28.276357    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.275608    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.275167    7816 addons.go:69] Setting cloud-spanner=true in profile "addons-291800"
	I0709 09:41:28.276872    7816 addons.go:234] Setting addon cloud-spanner=true in "addons-291800"
	I0709 09:41:28.275325    7816 addons.go:234] Setting addon helm-tiller=true in "addons-291800"
	I0709 09:41:28.276957    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.277156    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.275325    7816 addons.go:234] Setting addon registry=true in "addons-291800"
	I0709 09:41:28.277415    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.274717    7816 addons.go:69] Setting volumesnapshots=true in profile "addons-291800"
	I0709 09:41:28.277735    7816 addons.go:234] Setting addon volumesnapshots=true in "addons-291800"
	I0709 09:41:28.273570    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.276274    7816 out.go:177] * Verifying Kubernetes components...
	I0709 09:41:28.278778    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:28.284762    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.285376    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.285690    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.287199    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.288938    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.290374    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.290374    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.290374    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.290374    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.291291    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.291501    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.292045    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.293483    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.293483    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.293483    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.293483    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:28.311593    7816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 09:41:30.013881    7816 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.7408873s)
	I0709 09:41:30.013881    7816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 09:41:30.013881    7816 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.7021702s)
	I0709 09:41:30.065388    7816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 09:41:31.360964    7816 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.2955745s)
	I0709 09:41:31.360964    7816 node_ready.go:35] waiting up to 6m0s for node "addons-291800" to be "Ready" ...
	I0709 09:41:31.360964    7816 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.3470822s)
	I0709 09:41:31.360964    7816 start.go:946] {"host.minikube.internal": 172.18.192.1} host record injected into CoreDNS's ConfigMap
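The host-record injection completed above works by piping the coredns ConfigMap through sed to insert a `hosts` block before the `forward` line, so that host.minikube.internal resolves to the host's IP (172.18.192.1 in this run). The same sed expression applied to a two-line stand-in Corefile instead of the live ConfigMap (assumes GNU sed, which expands `\n` in inserted text):

```shell
# Reproduce the log's CoreDNS edit on a minimal sample Corefile.
corefile='        errors
        forward . /etc/resolv.conf'
patched="$(printf '%s\n' "$corefile" | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }')"
printf '%s\n' "$patched"
```

The `fallthrough` directive matters: without it, the `hosts` plugin would answer NXDOMAIN for every name it doesn't know instead of passing cluster and upstream queries on to the next plugin.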
	I0709 09:41:31.657647    7816 node_ready.go:49] node "addons-291800" has status "Ready":"True"
	I0709 09:41:31.657647    7816 node_ready.go:38] duration metric: took 296.6824ms for node "addons-291800" to be "Ready" ...
	I0709 09:41:31.657647    7816 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 09:41:31.821674    7816 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-m4mrd" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:32.329432    7816 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-291800" context rescaled to 1 replicas
	I0709 09:41:34.410827    7816 pod_ready.go:102] pod "coredns-7db6d8ff4d-m4mrd" in "kube-system" namespace has status "Ready":"False"
	I0709 09:41:35.265376    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.265376    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.271868    7816 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0709 09:41:35.276459    7816 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0709 09:41:35.276459    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0709 09:41:35.276459    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.298292    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.298292    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.300632    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.300632    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.302561    7816 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0709 09:41:35.306308    7816 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.1
	I0709 09:41:35.313182    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.315879    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.314725    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.315999    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.315999    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:35.318401    7816 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0709 09:41:35.318436    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0709 09:41:35.318436    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.320322    7816 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0709 09:41:35.320322    7816 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0709 09:41:35.320322    7816 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0709 09:41:35.320322    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.325095    7816 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0709 09:41:35.325095    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.325095    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.327045    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.330559    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.333690    7816 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0709 09:41:35.336553    7816 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0709 09:41:35.336941    7816 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0709 09:41:35.340891    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.344192    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.344192    7816 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0709 09:41:35.344263    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0709 09:41:35.344394    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.345168    7816 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0709 09:41:35.346478    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0709 09:41:35.346478    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.346478    7816 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0709 09:41:35.346478    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0709 09:41:35.346478    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.348779    7816 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0709 09:41:35.353272    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.353272    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.353272    7816 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0709 09:41:35.353910    7816 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0709 09:41:35.353910    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.356231    7816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 09:41:35.357651    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.357651    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.358919    7816 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 09:41:35.358919    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 09:41:35.358919    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.361543    7816 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0709 09:41:35.363969    7816 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0709 09:41:35.368093    7816 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0709 09:41:35.371661    7816 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0709 09:41:35.371927    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0709 09:41:35.372054    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.384973    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.384973    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.388756    7816 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0709 09:41:35.391812    7816 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0709 09:41:35.391812    7816 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0709 09:41:35.391812    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.803466    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.803466    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.804697    7816 addons.go:234] Setting addon default-storageclass=true in "addons-291800"
	I0709 09:41:35.804697    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:35.808404    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.827575    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.827655    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.827655    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.827655    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.831176    7816 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0709 09:41:35.831176    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.831176    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.842543    7816 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0709 09:41:35.843677    7816 out.go:177]   - Using image docker.io/registry:2.8.3
	I0709 09:41:35.844576    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:35.844576    7816 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0709 09:41:35.849590    7816 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0709 09:41:35.850215    7816 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0709 09:41:35.850279    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.852050    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:35.855360    7816 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-291800"
	I0709 09:41:35.855481    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:35.859269    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.870992    7816 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0709 09:41:35.883322    7816 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0709 09:41:35.883383    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0709 09:41:35.883491    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:35.885759    7816 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0709 09:41:35.902208    7816 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0709 09:41:35.908087    7816 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0709 09:41:35.919286    7816 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0709 09:41:35.936536    7816 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0709 09:41:35.970678    7816 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0709 09:41:35.985584    7816 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0709 09:41:35.985584    7816 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0709 09:41:35.985584    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:37.087912    7816 pod_ready.go:102] pod "coredns-7db6d8ff4d-m4mrd" in "kube-system" namespace has status "Ready":"False"
	I0709 09:41:39.839635    7816 pod_ready.go:102] pod "coredns-7db6d8ff4d-m4mrd" in "kube-system" namespace has status "Ready":"False"
	I0709 09:41:41.380375    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:41.380375    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:41.380375    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:41.514732    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:41.514732    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:41.514732    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:41.800669    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:41.800669    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:41.800669    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:41.866015    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:41.866015    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:41.866015    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:41.911781    7816 pod_ready.go:92] pod "coredns-7db6d8ff4d-m4mrd" in "kube-system" namespace has status "Ready":"True"
	I0709 09:41:41.911781    7816 pod_ready.go:81] duration metric: took 10.0900962s for pod "coredns-7db6d8ff4d-m4mrd" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:41.911781    7816 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfnh4" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:41.932447    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:41.933077    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:41.933077    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:41.932447    7816 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfnh4" in "kube-system" namespace has status "Ready":"True"
	I0709 09:41:41.933691    7816 pod_ready.go:81] duration metric: took 21.9098ms for pod "coredns-7db6d8ff4d-mfnh4" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:41.933691    7816 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-291800" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:41.934462    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:41.934534    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:41.934601    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:41.953303    7816 pod_ready.go:92] pod "etcd-addons-291800" in "kube-system" namespace has status "Ready":"True"
	I0709 09:41:41.953303    7816 pod_ready.go:81] duration metric: took 19.6123ms for pod "etcd-addons-291800" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:41.953303    7816 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-291800" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:41.985042    7816 pod_ready.go:92] pod "kube-apiserver-addons-291800" in "kube-system" namespace has status "Ready":"True"
	I0709 09:41:41.985042    7816 pod_ready.go:81] duration metric: took 31.7383ms for pod "kube-apiserver-addons-291800" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:41.985042    7816 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-291800" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:42.013421    7816 pod_ready.go:92] pod "kube-controller-manager-addons-291800" in "kube-system" namespace has status "Ready":"True"
	I0709 09:41:42.013478    7816 pod_ready.go:81] duration metric: took 28.4365ms for pod "kube-controller-manager-addons-291800" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:42.013478    7816 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c6xgn" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:42.077855    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:42.077855    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:42.077855    7816 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 09:41:42.077855    7816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 09:41:42.077855    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:42.135463    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:42.135463    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:42.135463    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:42.253100    7816 pod_ready.go:92] pod "kube-proxy-c6xgn" in "kube-system" namespace has status "Ready":"True"
	I0709 09:41:42.253100    7816 pod_ready.go:81] duration metric: took 239.6218ms for pod "kube-proxy-c6xgn" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:42.253100    7816 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-291800" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:42.300682    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:42.300682    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:42.300682    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:42.367342    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:42.367342    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:42.367342    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:42.425686    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:42.425686    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:42.435170    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:42.635594    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:42.635594    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:42.635594    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:42.643979    7816 pod_ready.go:92] pod "kube-scheduler-addons-291800" in "kube-system" namespace has status "Ready":"True"
	I0709 09:41:42.643979    7816 pod_ready.go:81] duration metric: took 390.8786ms for pod "kube-scheduler-addons-291800" in "kube-system" namespace to be "Ready" ...
	I0709 09:41:42.643979    7816 pod_ready.go:38] duration metric: took 10.9863203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 09:41:42.643979    7816 api_server.go:52] waiting for apiserver process to appear ...
	I0709 09:41:42.669486    7816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 09:41:42.770297    7816 api_server.go:72] duration metric: took 14.4972898s to wait for apiserver process to appear ...
	I0709 09:41:42.770297    7816 api_server.go:88] waiting for apiserver healthz status ...
	I0709 09:41:42.770297    7816 api_server.go:253] Checking apiserver healthz at https://172.18.206.170:8443/healthz ...
	I0709 09:41:42.782221    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:42.782221    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:42.782221    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:42.790210    7816 api_server.go:279] https://172.18.206.170:8443/healthz returned 200:
	ok
	I0709 09:41:42.793546    7816 api_server.go:141] control plane version: v1.30.2
	I0709 09:41:42.793546    7816 api_server.go:131] duration metric: took 23.2491ms to wait for apiserver health ...
	I0709 09:41:42.793546    7816 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 09:41:42.796329    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:42.796329    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:42.798638    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:42.848069    7816 system_pods.go:59] 7 kube-system pods found
	I0709 09:41:42.848185    7816 system_pods.go:61] "coredns-7db6d8ff4d-m4mrd" [dade2ea0-4fe2-4e94-8719-ed1b1cdaca69] Running
	I0709 09:41:42.848185    7816 system_pods.go:61] "coredns-7db6d8ff4d-mfnh4" [0c8222ac-06b6-4a90-b18b-99b8bce52375] Running
	I0709 09:41:42.848185    7816 system_pods.go:61] "etcd-addons-291800" [2e6d67e7-7231-423e-8d35-4d4b29918f99] Running
	I0709 09:41:42.848250    7816 system_pods.go:61] "kube-apiserver-addons-291800" [4b3d20cd-2603-4623-8056-273388ba12f8] Running
	I0709 09:41:42.848250    7816 system_pods.go:61] "kube-controller-manager-addons-291800" [2edc4ac9-e8d9-4935-aa16-71708736f48e] Running
	I0709 09:41:42.848250    7816 system_pods.go:61] "kube-proxy-c6xgn" [56af383a-1616-4186-8287-e56bc859bf2f] Running
	I0709 09:41:42.848250    7816 system_pods.go:61] "kube-scheduler-addons-291800" [7cb1df03-b681-4528-bb67-e9632d888928] Running
	I0709 09:41:42.848250    7816 system_pods.go:74] duration metric: took 54.7034ms to wait for pod list to return data ...
	I0709 09:41:42.848250    7816 default_sa.go:34] waiting for default service account to be created ...
	I0709 09:41:42.946919    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:42.946919    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:42.985426    7816 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0709 09:41:43.011987    7816 out.go:177]   - Using image docker.io/busybox:stable
	I0709 09:41:43.027696    7816 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0709 09:41:43.027696    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0709 09:41:43.027696    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:43.057596    7816 default_sa.go:45] found service account: "default"
	I0709 09:41:43.057596    7816 default_sa.go:55] duration metric: took 209.3464ms for default service account to be created ...
	I0709 09:41:43.057596    7816 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 09:41:43.232992    7816 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0709 09:41:43.232992    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:43.277478    7816 system_pods.go:86] 7 kube-system pods found
	I0709 09:41:43.277478    7816 system_pods.go:89] "coredns-7db6d8ff4d-m4mrd" [dade2ea0-4fe2-4e94-8719-ed1b1cdaca69] Running
	I0709 09:41:43.277478    7816 system_pods.go:89] "coredns-7db6d8ff4d-mfnh4" [0c8222ac-06b6-4a90-b18b-99b8bce52375] Running
	I0709 09:41:43.277478    7816 system_pods.go:89] "etcd-addons-291800" [2e6d67e7-7231-423e-8d35-4d4b29918f99] Running
	I0709 09:41:43.277478    7816 system_pods.go:89] "kube-apiserver-addons-291800" [4b3d20cd-2603-4623-8056-273388ba12f8] Running
	I0709 09:41:43.277478    7816 system_pods.go:89] "kube-controller-manager-addons-291800" [2edc4ac9-e8d9-4935-aa16-71708736f48e] Running
	I0709 09:41:43.277478    7816 system_pods.go:89] "kube-proxy-c6xgn" [56af383a-1616-4186-8287-e56bc859bf2f] Running
	I0709 09:41:43.277478    7816 system_pods.go:89] "kube-scheduler-addons-291800" [7cb1df03-b681-4528-bb67-e9632d888928] Running
	I0709 09:41:43.277478    7816 system_pods.go:126] duration metric: took 219.8811ms to wait for k8s-apps to be running ...
	I0709 09:41:43.277478    7816 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 09:41:43.308786    7816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 09:41:43.471319    7816 system_svc.go:56] duration metric: took 193.8416ms WaitForService to wait for kubelet
	I0709 09:41:43.471319    7816 kubeadm.go:576] duration metric: took 15.1983114s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 09:41:43.471319    7816 node_conditions.go:102] verifying NodePressure condition ...
	I0709 09:41:43.502699    7816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 09:41:43.502699    7816 node_conditions.go:123] node cpu capacity is 2
	I0709 09:41:43.502699    7816 node_conditions.go:105] duration metric: took 31.3794ms to run NodePressure ...
	I0709 09:41:43.502699    7816 start.go:240] waiting for startup goroutines ...
	I0709 09:41:48.247993    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:48.249114    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:48.249114    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:48.981551    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:48.981551    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:48.984597    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:49.304492    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:49.304492    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:49.304492    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:49.336463    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0709 09:41:49.411544    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:49.411544    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:49.412652    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:49.547576    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:49.547576    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:49.547576    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:49.557620    7816 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0709 09:41:49.557620    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0709 09:41:49.623042    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:49.623145    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:49.623511    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:49.663438    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:49.663438    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:49.663733    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:49.704750    7816 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0709 09:41:49.704750    7816 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0709 09:41:49.719886    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:49.721186    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:49.721186    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:49.756376    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:49.756376    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:49.756789    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:49.793413    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0709 09:41:49.851213    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:49.851308    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:49.851628    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:49.904287    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:49.904348    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:49.904619    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:49.945266    7816 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0709 09:41:49.945445    7816 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0709 09:41:49.980301    7816 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0709 09:41:49.980446    7816 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0709 09:41:50.009606    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:50.009727    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:50.010001    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:50.071794    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0709 09:41:50.084686    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:50.084779    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:50.085101    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:50.128998    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:50.129059    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:50.129183    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:50.150806    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:50.150875    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:50.151034    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:50.267287    7816 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0709 09:41:50.267858    7816 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0709 09:41:50.271843    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0709 09:41:50.325409    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0709 09:41:50.330127    7816 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0709 09:41:50.330127    7816 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0709 09:41:50.390426    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.0539614s)
	I0709 09:41:50.483181    7816 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0709 09:41:50.483334    7816 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0709 09:41:50.540086    7816 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0709 09:41:50.540188    7816 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0709 09:41:50.579383    7816 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0709 09:41:50.579460    7816 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0709 09:41:50.602075    7816 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0709 09:41:50.602075    7816 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0709 09:41:50.627917    7816 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0709 09:41:50.627917    7816 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0709 09:41:50.730202    7816 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0709 09:41:50.730334    7816 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0709 09:41:50.797445    7816 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0709 09:41:50.797495    7816 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0709 09:41:50.891156    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:50.891156    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:50.891156    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:41:50.909678    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 09:41:50.948478    7816 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0709 09:41:50.948535    7816 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0709 09:41:50.968404    7816 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0709 09:41:50.968504    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0709 09:41:51.010699    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0709 09:41:51.047947    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0709 09:41:51.072885    7816 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0709 09:41:51.072967    7816 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0709 09:41:51.133801    7816 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0709 09:41:51.133909    7816 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0709 09:41:51.242435    7816 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0709 09:41:51.242554    7816 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0709 09:41:51.340594    7816 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0709 09:41:51.340594    7816 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0709 09:41:51.423347    7816 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0709 09:41:51.423485    7816 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0709 09:41:51.586131    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0709 09:41:51.588686    7816 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0709 09:41:51.588686    7816 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0709 09:41:51.687052    7816 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0709 09:41:51.687117    7816 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0709 09:41:51.709359    7816 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0709 09:41:51.709359    7816 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0709 09:41:51.774247    7816 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0709 09:41:51.774247    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0709 09:41:51.920459    7816 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0709 09:41:51.920459    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0709 09:41:51.949350    7816 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0709 09:41:51.950518    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0709 09:41:51.970907    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:51.970962    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:51.971444    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:52.030382    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0709 09:41:52.043728    7816 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0709 09:41:52.043768    7816 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0709 09:41:52.143159    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0709 09:41:52.256979    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0709 09:41:52.460760    7816 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0709 09:41:52.460837    7816 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0709 09:41:52.676763    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.6048912s)
	I0709 09:41:52.676763    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.8833471s)
	I0709 09:41:52.804272    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 09:41:52.827183    7816 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0709 09:41:52.827183    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0709 09:41:52.990676    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:52.998942    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:53.001042    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:53.120216    7816 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0709 09:41:53.120216    7816 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0709 09:41:53.628857    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0709 09:41:53.757091    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:41:53.761854    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:53.761854    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:41:53.770650    7816 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0709 09:41:53.770650    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0709 09:41:54.780974    7816 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0709 09:41:54.781071    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0709 09:41:55.554084    7816 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0709 09:41:55.554084    7816 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0709 09:41:55.809149    7816 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0709 09:41:56.356357    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0709 09:41:56.623305    7816 addons.go:234] Setting addon gcp-auth=true in "addons-291800"
	I0709 09:41:56.623502    7816 host.go:66] Checking if "addons-291800" exists ...
	I0709 09:41:56.625006    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:41:57.063812    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.7918903s)
	I0709 09:41:57.063812    7816 addons.go:475] Verifying addon metrics-server=true in "addons-291800"
	I0709 09:41:58.759431    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:41:58.759431    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:41:58.775315    7816 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0709 09:41:58.775315    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-291800 ).state
	I0709 09:42:01.007969    7816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 09:42:01.007969    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:42:01.007969    7816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-291800 ).networkadapters[0]).ipaddresses[0]
	I0709 09:42:02.351080    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.025658s)
	I0709 09:42:02.351080    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.3403682s)
	I0709 09:42:02.351080    7816 addons.go:475] Verifying addon ingress=true in "addons-291800"
	I0709 09:42:02.351080    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.4413896s)
	I0709 09:42:02.353900    7816 out.go:177] * Verifying ingress addon...
	I0709 09:42:02.361593    7816 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0709 09:42:02.414582    7816 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0709 09:42:02.414693    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:02.985359    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:03.444531    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:03.711440    7816 main.go:141] libmachine: [stdout =====>] : 172.18.206.170
	
	I0709 09:42:03.711526    7816 main.go:141] libmachine: [stderr =====>] : 
	I0709 09:42:03.711818    7816 sshutil.go:53] new ssh client: &{IP:172.18.206.170 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-291800\id_rsa Username:docker}
	I0709 09:42:03.959617    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:04.427163    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:04.901921    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:05.298225    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.2501834s)
	I0709 09:42:05.298394    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (13.2678286s)
	I0709 09:42:05.298225    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.7120795s)
	I0709 09:42:05.298468    7816 addons.go:475] Verifying addon registry=true in "addons-291800"
	I0709 09:42:05.298606    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (13.1552945s)
	W0709 09:42:05.298651    7816 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0709 09:42:05.298770    7816 retry.go:31] will retry after 339.962411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0709 09:42:05.298801    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (13.0418078s)
	I0709 09:42:05.298801    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.6699318s)
	I0709 09:42:05.298801    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.4934258s)
	I0709 09:42:05.301170    7816 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-291800 service yakd-dashboard -n yakd-dashboard
	
	I0709 09:42:05.303259    7816 out.go:177] * Verifying registry addon...
	I0709 09:42:05.309963    7816 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0709 09:42:05.373686    7816 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0709 09:42:05.373752    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0709 09:42:05.394936    7816 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0709 09:42:05.434150    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:05.671486    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0709 09:42:05.856136    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:05.941990    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:06.371661    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:06.410863    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:06.530743    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.1742879s)
	I0709 09:42:06.530906    7816 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-291800"
	I0709 09:42:06.531400    7816 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.7560771s)
	I0709 09:42:06.536203    7816 out.go:177] * Verifying csi-hostpath-driver addon...
	I0709 09:42:06.541070    7816 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0709 09:42:06.547407    7816 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0709 09:42:06.552252    7816 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0709 09:42:06.553047    7816 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0709 09:42:06.553047    7816 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0709 09:42:06.580255    7816 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0709 09:42:06.580255    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:06.729333    7816 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0709 09:42:06.729333    7816 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0709 09:42:06.828376    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:06.854370    7816 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0709 09:42:06.854931    7816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0709 09:42:06.872550    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:07.009117    7816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0709 09:42:07.058428    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:07.332402    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:07.388575    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:07.562187    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:07.831270    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:07.882473    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:08.068585    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:08.333843    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:08.388864    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:08.568697    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:08.791699    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.1202092s)
	I0709 09:42:08.832953    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:08.869901    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:09.074735    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:09.189827    7816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.180621s)
	I0709 09:42:09.197620    7816 addons.go:475] Verifying addon gcp-auth=true in "addons-291800"
	I0709 09:42:09.200868    7816 out.go:177] * Verifying gcp-auth addon...
	I0709 09:42:09.205399    7816 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0709 09:42:09.226050    7816 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0709 09:42:09.326417    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:09.383583    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:09.569959    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:09.827976    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:09.879443    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:10.071519    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:10.332971    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:10.369716    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:10.575898    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:10.820868    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:10.883333    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:11.068725    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:11.321870    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:11.380683    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:11.563338    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:11.826694    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:11.868974    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:12.068340    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:12.325837    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:12.386453    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:12.567936    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:12.825928    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:12.881072    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:13.070120    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:13.324831    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:13.381224    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:13.573440    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:13.819548    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:13.878882    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:14.077643    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:14.325200    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:14.683220    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:14.686276    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:14.842212    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:14.883575    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:15.134139    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:15.326882    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:15.387381    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:15.570910    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:15.829509    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:15.882509    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:16.072687    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:16.329335    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:16.837525    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:16.837525    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:16.845196    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:16.886124    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:17.058229    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:17.393239    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:17.394817    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:17.571506    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:17.833166    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:17.880328    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:18.071472    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:18.328759    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:18.375911    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:18.579352    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:18.823939    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:18.904202    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:19.080054    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:19.327108    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:19.382802    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:19.566156    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:19.833148    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:19.880467    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:20.056700    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:20.336847    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:20.371809    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:20.571515    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:20.838151    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:20.880359    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:21.069159    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:21.338969    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:21.371351    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:21.557415    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:21.824988    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:21.868943    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:22.076794    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:22.337445    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:22.371343    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:22.571779    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:22.834356    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:22.877778    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:23.068837    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:23.326625    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:23.387488    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:23.565055    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:23.826011    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:23.888354    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:24.065552    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:24.332172    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:24.370671    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:24.563948    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:24.823316    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:24.882959    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:25.074137    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:25.332371    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:25.386256    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:25.562322    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:25.831051    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:25.872239    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:26.065387    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:26.334112    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:26.371180    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:26.568749    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:26.819360    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:26.877048    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:27.074345    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:27.329413    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:27.368445    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:27.572850    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:27.826239    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:27.884984    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:28.104864    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:28.346260    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:28.390637    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:28.571839    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:28.825415    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:28.870091    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:29.061122    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:29.329746    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:29.370827    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:29.843534    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:29.847298    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:30.030896    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:30.079113    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:30.330769    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:30.382791    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:30.566585    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:30.829992    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:30.870215    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:31.063622    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:31.316653    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:31.369988    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:31.551672    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:31.819213    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:31.880414    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:32.056589    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:32.324111    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:32.379313    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:32.564307    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:32.832207    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:32.871698    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:33.065821    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:33.326387    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:33.386086    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:33.571867    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:33.818934    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:33.881334    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:34.075120    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:34.332652    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:34.369014    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:34.572310    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:34.837238    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:34.879157    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:35.061601    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:35.994871    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:36.005014    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:36.017970    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:36.020505    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:36.030126    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:36.056474    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:36.328112    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:36.368724    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:36.695934    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:36.825296    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:36.878591    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:37.069523    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:37.331345    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:37.367361    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:37.573133    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:37.824051    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:37.880669    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:38.062547    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:38.333885    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:38.387959    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:38.569521    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:38.836663    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:38.874980    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:39.097192    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:39.331047    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:39.381143    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:39.563347    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:39.826762    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:39.867750    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:40.073324    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:40.340449    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:40.371353    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:40.562253    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:40.824840    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:40.885204    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:41.066655    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:41.334990    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:41.373097    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:41.558174    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:41.824964    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:41.882638    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:42.058692    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:42.318104    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:42.371244    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:42.567105    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:42.825530    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:42.883344    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:43.062220    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:43.324465    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:43.368737    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:43.687583    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:43.832518    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:43.868761    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:44.066039    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:44.390029    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:44.390766    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:44.568164    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:44.824449    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:44.875150    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:45.070910    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:45.326839    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:45.381001    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:45.559373    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:45.820283    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:45.873271    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:46.069495    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:46.324733    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:46.382436    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:46.567554    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:46.817108    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:46.877914    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:47.065827    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:47.335314    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:47.370741    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:47.568022    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:47.827159    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:47.888225    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:48.066757    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:48.338649    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:48.374065    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:48.561936    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:48.824501    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:48.882076    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:49.064696    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:49.322857    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:49.371031    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:49.562520    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:49.820614    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:49.887206    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:50.071273    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:50.320016    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:50.377492    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:50.556718    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:50.826764    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:50.881657    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:51.074356    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:51.331268    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:51.368240    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:51.566487    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:51.820140    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:51.875728    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:52.060946    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:52.337031    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:52.378582    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:52.576547    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:52.829929    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:52.868331    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:53.084979    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:53.325691    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:53.373633    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:53.576173    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:53.825381    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:53.882642    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:54.068040    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:54.335099    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:54.372120    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:54.573062    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:54.844544    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:54.926207    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:55.077460    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:55.325672    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:55.382921    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:55.566407    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:55.822659    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:55.880890    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:56.068486    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:56.346436    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:56.439429    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:56.556747    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:56.819990    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:56.871765    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:57.089072    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:57.338572    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:57.371475    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:57.559866    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:57.843923    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:57.874361    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:58.066995    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:58.321668    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:58.386647    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:58.558199    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:58.818337    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:58.875162    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:59.068443    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:59.328935    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:59.369567    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:42:59.560091    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:42:59.819487    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:42:59.873401    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:00.070082    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:00.326485    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:00.382962    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:00.565901    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:00.825610    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:00.872016    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:01.062695    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:01.318566    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:01.369002    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:01.560682    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:01.825196    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:01.876727    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:02.061861    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:02.490282    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:02.491689    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:02.588377    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:02.817095    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:02.872459    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:03.075821    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:03.343987    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:03.368587    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:03.576898    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:03.825249    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:03.881604    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:04.057778    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:04.345241    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:04.989593    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:04.994407    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:04.996510    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:04.996510    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:05.083527    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:05.331852    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:05.369456    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:05.569655    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:05.826463    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:05.883579    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:06.078043    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:06.332334    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:06.379993    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:06.574810    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:06.833250    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:06.884318    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:07.103670    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:07.331558    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:07.369384    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:07.571129    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:07.821105    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:07.878722    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:08.076248    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:08.333640    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:08.376514    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:08.564033    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:08.830106    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:08.870234    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:09.068652    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:09.322667    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:09.373772    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:09.568714    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:09.836982    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:09.878711    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:10.069202    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:10.334727    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:10.375776    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:10.557948    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:10.829533    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:10.888283    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:11.064497    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:11.323332    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:11.382290    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:11.570806    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:11.820456    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:11.886420    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:12.071286    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:12.324369    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:12.369157    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:12.567580    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:12.820648    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:12.875266    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:13.067707    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:13.321765    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:13.377758    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:13.566266    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:13.819966    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:13.877360    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:14.057700    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:14.335567    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:14.395237    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:14.560543    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:14.822736    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:14.887803    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:15.068445    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:15.333696    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:15.373810    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:15.570954    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:15.838595    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:15.877223    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:16.079276    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:16.397952    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:16.400605    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:16.557547    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:16.827250    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:16.879657    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:17.077774    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:17.330362    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:17.370336    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:17.937404    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:17.939217    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:17.980044    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:18.069266    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:18.329009    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:18.384336    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:18.570863    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:18.824260    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:18.868724    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:19.062152    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:19.334547    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:19.382101    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:19.557165    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:19.822951    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:19.884183    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:20.073887    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:20.331129    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:20.387487    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:20.575913    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:20.825047    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:20.874572    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:21.064746    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:21.312342    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:21.390358    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:21.566811    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:21.816281    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0709 09:43:21.870567    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:22.067937    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:22.324285    7816 kapi.go:107] duration metric: took 1m17.0142045s to wait for kubernetes.io/minikube-addons=registry ...
	I0709 09:43:22.378253    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:22.558854    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:22.878867    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:23.071308    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:23.395098    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:23.573967    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:23.872593    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:24.077681    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:24.380477    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:24.576607    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:43:24.868624    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:43:25.073467    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	... (the two "waiting for pod" entries above repeat verbatim for both selectors, alternating every ~0.25–0.5s, from 09:43:25 through 09:44:30; both pods remain Pending throughout) ...
	I0709 09:44:30.382503    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:44:30.563507    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:30.870865    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:44:31.060464    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:31.381843    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:44:31.571056    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:31.880654    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:44:32.141357    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:32.524933    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:44:32.653301    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:32.881248    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:44:33.065766    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:33.380610    7816 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0709 09:44:33.561905    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:33.885419    7816 kapi.go:107] duration metric: took 2m31.5236538s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0709 09:44:34.070887    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:34.574218    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:35.065012    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:35.882598    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:36.058585    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:36.568024    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:37.065584    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:37.561787    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:38.065176    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:38.586077    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:39.075570    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:39.560191    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:40.062978    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:40.572150    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:41.080217    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:41.557189    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0709 09:44:42.077563    7816 kapi.go:107] duration metric: took 2m35.5299324s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0709 09:44:53.224404    7816 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0709 09:44:53.224404    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:53.720960    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:54.214411    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:54.723981    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:55.219709    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:55.735812    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:56.224972    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:56.713224    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:57.226183    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:57.717678    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:58.219418    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:58.722143    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:59.222734    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:44:59.726863    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:00.215384    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:00.719092    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:01.211996    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:01.725299    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:02.216016    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:02.725587    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:03.213048    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:03.723331    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:04.217770    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:04.714657    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:05.223318    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:05.727068    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:06.219947    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:06.717692    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:07.214863    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:07.713698    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:08.225970    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:08.713638    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:09.224451    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:09.711336    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:10.216194    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:10.720079    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:11.223791    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:11.716185    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:12.229266    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:12.726973    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:13.212510    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:13.722378    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:14.223843    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:14.716237    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:15.211842    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:15.732124    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:16.221674    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:16.722164    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:17.214764    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:17.715848    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:18.221979    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:18.716127    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:19.222271    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:19.717556    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:20.222645    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:20.713135    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:21.225298    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:21.725697    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:22.227404    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:22.718872    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:23.217258    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:23.720884    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:24.214845    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:24.720580    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:25.220959    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:25.719710    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:26.215302    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:26.728631    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:27.214473    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:27.724871    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:28.226688    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:28.724944    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:29.235895    7816 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0709 09:45:29.728148    7816 kapi.go:107] duration metric: took 3m20.5225175s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0709 09:45:29.731324    7816 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-291800 cluster.
	I0709 09:45:29.734497    7816 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0709 09:45:29.736135    7816 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
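	The `gcp-auth-skip-secret` label mentioned in the message above can be sketched as a pod manifest like the following. This is a minimal illustration, not part of the test run; the pod name and container are hypothetical, and the label value shown assumes the addon's documented opt-out behavior:

	```yaml
	# Hypothetical pod spec: the gcp-auth-skip-secret label tells the
	# gcp-auth webhook not to mount GCP credentials into this pod.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds        # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: nginx:alpine
	```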
	I0709 09:45:29.739404    7816 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, helm-tiller, storage-provisioner, volcano, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0709 09:45:29.742267    7816 addons.go:510] duration metric: took 4m1.4689983s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns metrics-server helm-tiller storage-provisioner volcano inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0709 09:45:29.742267    7816 start.go:245] waiting for cluster config update ...
	I0709 09:45:29.742267    7816 start.go:254] writing updated cluster config ...
	I0709 09:45:29.757328    7816 ssh_runner.go:195] Run: rm -f paused
	I0709 09:45:30.016618    7816 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0709 09:45:30.020811    7816 out.go:177] * Done! kubectl is now configured to use "addons-291800" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 09 16:46:04 addons-291800 dockerd[1440]: time="2024-07-09T16:46:04.854732917Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 16:46:04 addons-291800 dockerd[1432]: time="2024-07-09T16:46:04.858259330Z" level=info msg="ignoring event" container=8c0002cbbbacb9d0597da8777a3a3d8b9166d3bec6ef5cc7d61ed73e2840bffd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 16:46:04 addons-291800 dockerd[1440]: time="2024-07-09T16:46:04.896252777Z" level=warning msg="cleanup warnings time=\"2024-07-09T16:46:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 09 16:46:04 addons-291800 cri-dockerd[1330]: time="2024-07-09T16:46:04Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-7gkck_kube-system\": unexpected command output nsenter: cannot open /proc/4119/ns/net: No such file or directory\n with error: exit status 1"
	Jul 09 16:46:05 addons-291800 dockerd[1432]: time="2024-07-09T16:46:05.035662943Z" level=info msg="ignoring event" container=2c3f1e03a07737f927b66f06e3ee7095e64850751a0a5deb5f8b62d06e6ff982 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 16:46:05 addons-291800 dockerd[1440]: time="2024-07-09T16:46:05.035921144Z" level=info msg="shim disconnected" id=2c3f1e03a07737f927b66f06e3ee7095e64850751a0a5deb5f8b62d06e6ff982 namespace=moby
	Jul 09 16:46:05 addons-291800 dockerd[1440]: time="2024-07-09T16:46:05.036219146Z" level=warning msg="cleaning up after shim disconnected" id=2c3f1e03a07737f927b66f06e3ee7095e64850751a0a5deb5f8b62d06e6ff982 namespace=moby
	Jul 09 16:46:05 addons-291800 dockerd[1440]: time="2024-07-09T16:46:05.036331346Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 16:46:06 addons-291800 cri-dockerd[1330]: time="2024-07-09T16:46:06Z" level=error msg="error getting RW layer size for container ID '1b0fead77247a9583ba26407cfac4e771815142b239f758f9f29e65e71a926f5': Error response from daemon: No such container: 1b0fead77247a9583ba26407cfac4e771815142b239f758f9f29e65e71a926f5"
	Jul 09 16:46:06 addons-291800 cri-dockerd[1330]: time="2024-07-09T16:46:06Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1b0fead77247a9583ba26407cfac4e771815142b239f758f9f29e65e71a926f5'"
	Jul 09 16:46:07 addons-291800 cri-dockerd[1330]: time="2024-07-09T16:46:07Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Jul 09 16:46:08 addons-291800 dockerd[1440]: time="2024-07-09T16:46:08.115242182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 16:46:08 addons-291800 dockerd[1440]: time="2024-07-09T16:46:08.115539783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 16:46:08 addons-291800 dockerd[1440]: time="2024-07-09T16:46:08.116180486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 16:46:08 addons-291800 dockerd[1440]: time="2024-07-09T16:46:08.116434788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 16:46:12 addons-291800 dockerd[1440]: time="2024-07-09T16:46:12.791640143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 16:46:12 addons-291800 dockerd[1440]: time="2024-07-09T16:46:12.792532047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 16:46:12 addons-291800 dockerd[1440]: time="2024-07-09T16:46:12.792662348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 16:46:12 addons-291800 dockerd[1440]: time="2024-07-09T16:46:12.792956649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 16:46:12 addons-291800 cri-dockerd[1330]: time="2024-07-09T16:46:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7d4c5eae6ab0f717851bc4fe474c960b399dd64a5e9c888fea2fbcfaa3943a6e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 09 16:46:18 addons-291800 cri-dockerd[1330]: time="2024-07-09T16:46:18Z" level=error msg="error getting RW layer size for container ID '256c83a28c1856c4a474ab75cf10d34a0d2c49a087fae9072b437168f3c46f49': Error response from daemon: No such container: 256c83a28c1856c4a474ab75cf10d34a0d2c49a087fae9072b437168f3c46f49"
	Jul 09 16:46:18 addons-291800 cri-dockerd[1330]: time="2024-07-09T16:46:18Z" level=error msg="Set backoffDuration to : 1m0s for container ID '256c83a28c1856c4a474ab75cf10d34a0d2c49a087fae9072b437168f3c46f49'"
	Jul 09 16:46:18 addons-291800 cri-dockerd[1330]: time="2024-07-09T16:46:18Z" level=error msg="error getting RW layer size for container ID '8a373c794473091afc618d7392a30d6c6ac5f0ef8e1e280f4bcaf9468c0569ae': Error response from daemon: No such container: 8a373c794473091afc618d7392a30d6c6ac5f0ef8e1e280f4bcaf9468c0569ae"
	Jul 09 16:46:18 addons-291800 cri-dockerd[1330]: time="2024-07-09T16:46:18Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8a373c794473091afc618d7392a30d6c6ac5f0ef8e1e280f4bcaf9468c0569ae'"
	Jul 09 16:46:23 addons-291800 cri-dockerd[1330]: time="2024-07-09T16:46:23Z" level=info msg="Pulling image docker.io/nginx:latest: c6b156574604: Extracting [==================================>                ]  28.54MB/41.83MB"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	b88e2e3c3ef8a       nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                                                                19 seconds ago       Running             nginx                                    0                   29dbc4494feb5       nginx
	d06a084a65a63       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                                        26 seconds ago       Running             headlamp                                 0                   22a3cd6bcb470       headlamp-7867546754-ggw75
	8a2a065fdd3a0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 58 seconds ago       Running             gcp-auth                                 0                   436c093c22ef6       gcp-auth-5db96cd9b4-gvz77
	17c348ec4c9e0       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   4a5b24635401b       csi-hostpathplugin-vfx9b
	1d128730031f4       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         About a minute ago   Running             admission                                0                   61b27cff475de       volcano-admission-5f7844f7bc-w4n74
	464937bebf438       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   4a5b24635401b       csi-hostpathplugin-vfx9b
	93a45383b9b0f       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   b447cc8ecad67       ingress-nginx-controller-768f948f8f-s682g
	9362da966b209       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            2 minutes ago        Running             liveness-probe                           0                   4a5b24635401b       csi-hostpathplugin-vfx9b
	2970be89c4f6f       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           2 minutes ago        Running             hostpath                                 0                   4a5b24635401b       csi-hostpathplugin-vfx9b
	3b90ce0f60cf9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                2 minutes ago        Running             node-driver-registrar                    0                   4a5b24635401b       csi-hostpathplugin-vfx9b
	ebe98e815ed48       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   3e89253e561df       csi-hostpath-resizer-0
	a067ef4fcfde5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   4a5b24635401b       csi-hostpathplugin-vfx9b
	0d42f96ab8483       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   53f2bfae55d33       csi-hostpath-attacher-0
	61cebb530ccc2       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               2 minutes ago        Running             volcano-scheduler                        0                   d20989f099e6b       volcano-scheduler-844f6db89b-qsj6r
	78e9a79fe7019       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      2 minutes ago        Running             volcano-controllers                      0                   2c948105ed8e2       volcano-controllers-59cb4746db-psj8r
	076d41f80b8e2       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   33d3058bc1452       snapshot-controller-745499f584-5qmz6
	bf525bb96964b       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   b2cacff29875c       snapshot-controller-745499f584-dplxc
	44298eff9b9f0       684c5ea3b61b2                                                                                                                                2 minutes ago        Exited              patch                                    1                   5dbd506541b6d       ingress-nginx-admission-patch-zxm8c
	a44345dd4ed44       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              create                                   0                   6c0c898ac8d82       ingress-nginx-admission-create-zqr26
	171957952b470       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   5e1067b45e15a       local-path-provisioner-8d985888d-zmvdj
	3157c587b818f       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         2 minutes ago        Exited              main                                     0                   0a81927de0151       volcano-admission-init-cq57x
	b197fc6cfc63d       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        3 minutes ago        Running             yakd                                     0                   d9a52f07d190b       yakd-dashboard-799879c74f-kjjgv
	14730b1ff2ad3       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        3 minutes ago        Running             metrics-server                           0                   9a838b5c61e67       metrics-server-c59844bb4-8fmmc
	76eebe5677cbd       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4                               3 minutes ago        Running             cloud-spanner-emulator                   0                   642617caecdb9       cloud-spanner-emulator-6fcd4f6f98-492qg
	495d5d323c35d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             3 minutes ago        Running             minikube-ingress-dns                     0                   8b58e12d4b9c4       kube-ingress-dns-minikube
	96ab19d455381       nvcr.io/nvidia/k8s-device-plugin@sha256:c0b7a46a9203ec789173374c2886adfd424639d4be23d3c9a6a836c3b2c91c13                                     4 minutes ago        Running             nvidia-device-plugin-ctr                 0                   a19c596eb43c6       nvidia-device-plugin-daemonset-9c8jf
	825df3afa4a19       6e38f40d628db                                                                                                                                4 minutes ago        Running             storage-provisioner                      0                   dbe00b15e93ed       storage-provisioner
	a6050c5b5e1f2       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   e68dcaf52f475       coredns-7db6d8ff4d-m4mrd
	34adb0c362932       53c535741fb44                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   1ff7c2bbfc407       kube-proxy-c6xgn
	36f98def341ef       56ce0fd9fb532                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   ed6dccb5f5948       kube-apiserver-addons-291800
	9c7e59ec60891       3861cfcd7c04c                                                                                                                                5 minutes ago        Running             etcd                                     0                   95e82dbd22bde       etcd-addons-291800
	bdb1112e8b279       e874818b3caac                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   e41d15116f6d2       kube-controller-manager-addons-291800
	914307c141b37       7820c83aa1394                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   459c4738ccc24       kube-scheduler-addons-291800
	
	
	==> controller_ingress [93a45383b9b0] <==
	I0709 16:44:34.769196       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-s682g", UID:"fbcba2a3-395c-4149-9f10-b750e8914ef3", APIVersion:"v1", ResourceVersion:"1354", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0709 16:45:58.352692       7 controller.go:1107] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0709 16:45:58.402719       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.05s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.05s testedConfigurationSize:18.1kB}
	I0709 16:45:58.402774       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0709 16:45:58.439121       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0709 16:45:58.441121       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"2e3b7e4c-0632-48bd-af90-d7be941a78b0", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1698", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0709 16:45:58.442342       7 controller.go:1107] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0709 16:45:58.443247       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0709 16:45:58.679100       7 controller.go:210] "Backend successfully reloaded"
	I0709 16:45:58.681932       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-s682g", UID:"fbcba2a3-395c-4149-9f10-b750e8914ef3", APIVersion:"v1", ResourceVersion:"1354", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0709 16:46:01.776631       7 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	I0709 16:46:01.777022       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0709 16:46:01.876326       7 controller.go:210] "Backend successfully reloaded"
	I0709 16:46:01.876943       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-s682g", UID:"fbcba2a3-395c-4149-9f10-b750e8914ef3", APIVersion:"v1", ResourceVersion:"1354", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0709 16:46:05.109784       7 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	W0709 16:46:24.821453       7 controller.go:1107] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0709 16:46:25.012651       7 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.191s renderingIngressLength:2 renderingIngressTime:0s admissionTime:0.191s testedConfigurationSize:26.0kB}
	I0709 16:46:25.012687       7 main.go:107] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
	I0709 16:46:25.581211       7 store.go:440] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
	I0709 16:46:25.582172       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"97b58403-b781-4657-b38c-c2870534c729", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1817", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0709 16:46:25.591686       7 controller.go:1107] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0709 16:46:25.611280       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0709 16:46:25.834436       7 controller.go:210] "Backend successfully reloaded"
	I0709 16:46:25.835281       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-s682g", UID:"fbcba2a3-395c-4149-9f10-b750e8914ef3", APIVersion:"v1", ResourceVersion:"1354", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	10.244.0.1 - - [09/Jul/2024:16:46:24 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.5.0" 80 0.002 [default-nginx-80] [] 10.244.0.30:80 615 0.002 200 e760efe9217b9af02117d1938efd2c81
	
	
	==> coredns [a6050c5b5e1f] <==
	[INFO] 10.244.0.9:47788 - 3586 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000710003s
	[INFO] 10.244.0.9:52764 - 43621 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000965s
	[INFO] 10.244.0.9:52764 - 61030 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000264201s
	[INFO] 10.244.0.9:56642 - 39558 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000647s
	[INFO] 10.244.0.9:56642 - 50565 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000608902s
	[INFO] 10.244.0.9:36987 - 9754 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000216701s
	[INFO] 10.244.0.9:36987 - 9236 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000500702s
	[INFO] 10.244.0.9:54892 - 23889 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000662s
	[INFO] 10.244.0.9:54892 - 30804 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001671s
	[INFO] 10.244.0.9:46874 - 41686 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000142201s
	[INFO] 10.244.0.9:46874 - 57809 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000318801s
	[INFO] 10.244.0.9:51050 - 52119 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000157701s
	[INFO] 10.244.0.9:51050 - 28820 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000499s
	[INFO] 10.244.0.9:57339 - 63771 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000515s
	[INFO] 10.244.0.9:57339 - 33557 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000276801s
	[INFO] 10.244.0.26:35567 - 48198 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000345801s
	[INFO] 10.244.0.26:56080 - 53751 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000388701s
	[INFO] 10.244.0.26:51743 - 29462 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000185601s
	[INFO] 10.244.0.26:54525 - 15092 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0000802s
	[INFO] 10.244.0.26:52818 - 59794 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102901s
	[INFO] 10.244.0.26:46196 - 11640 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000460001s
	[INFO] 10.244.0.26:50206 - 60346 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002498007s
	[INFO] 10.244.0.26:58237 - 47924 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.002200007s
	[INFO] 10.244.0.28:47171 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0004904s
	[INFO] 10.244.0.28:38818 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000550601s
	
	
	==> describe nodes <==
	Name:               addons-291800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-291800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=addons-291800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T09_41_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-291800
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-291800"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 16:41:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-291800
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 16:46:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 16:46:21 +0000   Tue, 09 Jul 2024 16:41:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 16:46:21 +0000   Tue, 09 Jul 2024 16:41:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 16:46:21 +0000   Tue, 09 Jul 2024 16:41:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 16:46:21 +0000   Tue, 09 Jul 2024 16:41:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.206.170
	  Hostname:    addons-291800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 cecaebbf2c4d4551958124d4cfa9d83e
	  System UUID:                7d011c39-b255-e540-8ce1-011cfb0c670c
	  Boot ID:                    a1c5a013-cfee-4a10-b639-55a09bf81c89
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-492qg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  default                     hello-world-app-86c47465fc-6bxgb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  gcp-auth                    gcp-auth-5db96cd9b4-gvz77                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  headlamp                    headlamp-7867546754-ggw75                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-s682g    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m26s
	  kube-system                 coredns-7db6d8ff4d-m4mrd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 csi-hostpathplugin-vfx9b                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-addons-291800                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m14s
	  kube-system                 kube-apiserver-addons-291800                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-addons-291800        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-c6xgn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-addons-291800                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 metrics-server-c59844bb4-8fmmc               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m31s
	  kube-system                 nvidia-device-plugin-daemonset-9c8jf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 snapshot-controller-745499f584-5qmz6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 snapshot-controller-745499f584-dplxc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  local-path-storage          local-path-provisioner-8d985888d-zmvdj       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  volcano-system              volcano-admission-5f7844f7bc-w4n74           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  volcano-system              volcano-controllers-59cb4746db-psj8r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  volcano-system              volcano-scheduler-844f6db89b-qsj6r           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  yakd-dashboard              yakd-dashboard-799879c74f-kjjgv              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  Starting                 5m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node addons-291800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node addons-291800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node addons-291800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m14s                  kubelet          Node addons-291800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s                  kubelet          Node addons-291800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s                  kubelet          Node addons-291800 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m13s                  kubelet          Node addons-291800 status is now: NodeReady
	  Normal  RegisteredNode           5m                     node-controller  Node addons-291800 event: Registered Node addons-291800 in Controller
	
	
	==> dmesg <==
	[  +6.028215] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.100097] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.182287] kauditd_printk_skb: 18 callbacks suppressed
	[Jul 9 16:42] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.049848] kauditd_printk_skb: 113 callbacks suppressed
	[ +11.493981] kauditd_printk_skb: 76 callbacks suppressed
	[ +39.877274] kauditd_printk_skb: 6 callbacks suppressed
	[Jul 9 16:43] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.305577] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.020887] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.537308] kauditd_printk_skb: 26 callbacks suppressed
	[Jul 9 16:44] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.007645] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.012815] kauditd_printk_skb: 10 callbacks suppressed
	[ +14.962189] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.919207] kauditd_printk_skb: 45 callbacks suppressed
	[Jul 9 16:45] kauditd_printk_skb: 24 callbacks suppressed
	[ +12.741846] kauditd_printk_skb: 40 callbacks suppressed
	[ +10.414417] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.248646] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.302923] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.031853] kauditd_printk_skb: 6 callbacks suppressed
	[Jul 9 16:46] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.735753] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.262853] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9c7e59ec6089] <==
	{"level":"info","ts":"2024-07-09T16:46:25.580166Z","caller":"traceutil/trace.go:171","msg":"trace[498346636] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1820; }","duration":"317.241346ms","start":"2024-07-09T16:46:25.262916Z","end":"2024-07-09T16:46:25.580158Z","steps":["trace[498346636] 'agreement among raft nodes before linearized reading'  (duration: 317.086245ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:46:25.580383Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-09T16:46:25.262908Z","time spent":"317.332046ms","remote":"127.0.0.1:34966","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":11498,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-07-09T16:46:25.593708Z","caller":"traceutil/trace.go:171","msg":"trace[1490103449] transaction","detail":"{read_only:false; response_revision:1820; number_of_response:1; }","duration":"325.688679ms","start":"2024-07-09T16:46:25.268001Z","end":"2024-07-09T16:46:25.59369Z","steps":["trace[1490103449] 'process raft request'  (duration: 311.910525ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:46:25.593803Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-09T16:46:25.267992Z","time spent":"325.756579ms","remote":"127.0.0.1:35292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1891,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/default/hello-world-app-86c47465fc\" mod_revision:1813 > success:<request_put:<key:\"/registry/replicasets/default/hello-world-app-86c47465fc\" value_size:1827 >> failure:<request_range:<key:\"/registry/replicasets/default/hello-world-app-86c47465fc\" > >"}
	{"level":"warn","ts":"2024-07-09T16:46:25.59399Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.936306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-07-09T16:46:25.594016Z","caller":"traceutil/trace.go:171","msg":"trace[737529495] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1820; }","duration":"128.988306ms","start":"2024-07-09T16:46:25.46502Z","end":"2024-07-09T16:46:25.594009Z","steps":["trace[737529495] 'agreement among raft nodes before linearized reading'  (duration: 128.896806ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:46:25.584251Z","caller":"traceutil/trace.go:171","msg":"trace[158202141] transaction","detail":"{read_only:false; response_revision:1819; number_of_response:1; }","duration":"319.925057ms","start":"2024-07-09T16:46:25.26426Z","end":"2024-07-09T16:46:25.584185Z","steps":["trace[158202141] 'process raft request'  (duration: 315.61884ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:46:25.594278Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-09T16:46:25.26425Z","time spent":"330.000696ms","remote":"127.0.0.1:34868","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":725,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/hello-world-app-86c47465fc.17e099466351dc4a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/hello-world-app-86c47465fc.17e099466351dc4a\" value_size:639 lease:4390605670832569936 >> failure:<>"}
	{"level":"info","ts":"2024-07-09T16:46:25.595044Z","caller":"traceutil/trace.go:171","msg":"trace[1184459832] transaction","detail":"{read_only:false; response_revision:1818; number_of_response:1; }","duration":"334.468113ms","start":"2024-07-09T16:46:25.260564Z","end":"2024-07-09T16:46:25.595032Z","steps":["trace[1184459832] 'process raft request'  (duration: 319.259753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:46:25.595243Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-09T16:46:25.260549Z","time spent":"334.609914ms","remote":"127.0.0.1:34966","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2085,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/hello-world-app-86c47465fc-6bxgb\" mod_revision:1816 > success:<request_put:<key:\"/registry/pods/default/hello-world-app-86c47465fc-6bxgb\" value_size:2022 >> failure:<request_range:<key:\"/registry/pods/default/hello-world-app-86c47465fc-6bxgb\" > >"}
	{"level":"info","ts":"2024-07-09T16:46:25.835139Z","caller":"traceutil/trace.go:171","msg":"trace[1463777170] transaction","detail":"{read_only:false; response_revision:1821; number_of_response:1; }","duration":"238.645237ms","start":"2024-07-09T16:46:25.596442Z","end":"2024-07-09T16:46:25.835087Z","steps":["trace[1463777170] 'process raft request'  (duration: 227.829194ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:46:25.840354Z","caller":"traceutil/trace.go:171","msg":"trace[162404473] linearizableReadLoop","detail":"{readStateIndex:1915; appliedIndex:1913; }","duration":"222.999775ms","start":"2024-07-09T16:46:25.617345Z","end":"2024-07-09T16:46:25.840345Z","steps":["trace[162404473] 'read index received'  (duration: 206.971213ms)","trace[162404473] 'applied index is now lower than readState.Index'  (duration: 16.027862ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-09T16:46:25.841397Z","caller":"traceutil/trace.go:171","msg":"trace[560921148] transaction","detail":"{read_only:false; response_revision:1822; number_of_response:1; }","duration":"240.929347ms","start":"2024-07-09T16:46:25.600458Z","end":"2024-07-09T16:46:25.841387Z","steps":["trace[560921148] 'process raft request'  (duration: 239.707742ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:46:25.8416Z","caller":"traceutil/trace.go:171","msg":"trace[581329928] transaction","detail":"{read_only:false; response_revision:1823; number_of_response:1; }","duration":"222.035972ms","start":"2024-07-09T16:46:25.619537Z","end":"2024-07-09T16:46:25.841573Z","steps":["trace[581329928] 'process raft request'  (duration: 220.684067ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:46:25.841905Z","caller":"traceutil/trace.go:171","msg":"trace[453626018] transaction","detail":"{read_only:false; response_revision:1824; number_of_response:1; }","duration":"205.403007ms","start":"2024-07-09T16:46:25.636493Z","end":"2024-07-09T16:46:25.841896Z","steps":["trace[453626018] 'process raft request'  (duration: 203.7952ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:46:25.842241Z","caller":"traceutil/trace.go:171","msg":"trace[1540796980] transaction","detail":"{read_only:false; response_revision:1825; number_of_response:1; }","duration":"185.85793ms","start":"2024-07-09T16:46:25.65637Z","end":"2024-07-09T16:46:25.842228Z","steps":["trace[1540796980] 'process raft request'  (duration: 183.950022ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:46:25.842428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.067984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/hello-world-app-86c47465fc-6bxgb\" ","response":"range_response_count:1 size:2100"}
	{"level":"info","ts":"2024-07-09T16:46:25.842455Z","caller":"traceutil/trace.go:171","msg":"trace[41254239] range","detail":"{range_begin:/registry/pods/default/hello-world-app-86c47465fc-6bxgb; range_end:; response_count:1; response_revision:1825; }","duration":"225.131284ms","start":"2024-07-09T16:46:25.617317Z","end":"2024-07-09T16:46:25.842448Z","steps":["trace[41254239] 'agreement among raft nodes before linearized reading'  (duration: 225.029383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:46:25.845906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.581393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/default/hello-world-app-86c47465fc\" ","response":"range_response_count:1 size:1906"}
	{"level":"info","ts":"2024-07-09T16:46:25.84634Z","caller":"traceutil/trace.go:171","msg":"trace[1039676163] range","detail":"{range_begin:/registry/replicasets/default/hello-world-app-86c47465fc; range_end:; response_count:1; response_revision:1825; }","duration":"228.030195ms","start":"2024-07-09T16:46:25.618299Z","end":"2024-07-09T16:46:25.846329Z","steps":["trace[1039676163] 'agreement among raft nodes before linearized reading'  (duration: 227.533993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:46:25.86352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.085197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-09T16:46:25.863583Z","caller":"traceutil/trace.go:171","msg":"trace[2125786007] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotcontents/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotcontents0; response_count:0; response_revision:1825; }","duration":"152.201698ms","start":"2024-07-09T16:46:25.711371Z","end":"2024-07-09T16:46:25.863573Z","steps":["trace[2125786007] 'agreement among raft nodes before linearized reading'  (duration: 152.061797ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T16:46:26.58757Z","caller":"traceutil/trace.go:171","msg":"trace[815116498] transaction","detail":"{read_only:false; response_revision:1835; number_of_response:1; }","duration":"124.621189ms","start":"2024-07-09T16:46:26.46293Z","end":"2024-07-09T16:46:26.587551Z","steps":["trace[815116498] 'process raft request'  (duration: 124.460988ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T16:46:26.77825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.103194ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-09T16:46:26.791288Z","caller":"traceutil/trace.go:171","msg":"trace[1025554510] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1835; }","duration":"139.176646ms","start":"2024-07-09T16:46:26.652087Z","end":"2024-07-09T16:46:26.791264Z","steps":["trace[1025554510] 'range keys from in-memory index tree'  (duration: 125.987294ms)"],"step_count":1}
	
	
	==> gcp-auth [8a2a065fdd3a] <==
	2024/07/09 16:45:29 GCP Auth Webhook started!
	2024/07/09 16:45:35 Ready to marshal response ...
	2024/07/09 16:45:35 Ready to write response ...
	2024/07/09 16:45:40 Ready to marshal response ...
	2024/07/09 16:45:40 Ready to write response ...
	2024/07/09 16:45:46 Ready to marshal response ...
	2024/07/09 16:45:46 Ready to write response ...
	2024/07/09 16:45:46 Ready to marshal response ...
	2024/07/09 16:45:46 Ready to write response ...
	2024/07/09 16:45:46 Ready to marshal response ...
	2024/07/09 16:45:46 Ready to write response ...
	2024/07/09 16:45:59 Ready to marshal response ...
	2024/07/09 16:45:59 Ready to write response ...
	2024/07/09 16:46:12 Ready to marshal response ...
	2024/07/09 16:46:12 Ready to write response ...
	2024/07/09 16:46:25 Ready to marshal response ...
	2024/07/09 16:46:25 Ready to write response ...
	
	
	==> kernel <==
	 16:46:28 up 7 min,  0 users,  load average: 2.66, 2.23, 1.10
	Linux addons-291800 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [36f98def341e] <==
	Trace[1609950465]: ["GuaranteedUpdate etcd3" audit-id:a4df1a70-0029-4a6a-95db-9e8f97ab3e2c,key:/services/endpoints/kube-system/tiller-deploy,type:*core.Endpoints,resource:endpoints 810ms (16:45:58.483)
	Trace[1609950465]:  ---"Txn call completed" 759ms (16:45:59.243)]
	Trace[1609950465]: ---"Write to database call succeeded" len:233 41ms (16:45:59.286)
	Trace[1609950465]: [803.004817ms] [803.004817ms] END
	I0709 16:45:59.294833       1 trace.go:236] Trace[66538718]: "List" accept:application/json, */*,audit-id:bfa0bf72-5dd4-4d6f-b252-0d87c063a863,client:172.18.192.1,api-group:,api-version:v1,name:,subresource:,namespace:headlamp,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/headlamp/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (09-Jul-2024 16:45:58.684) (total time: 610ms):
	Trace[66538718]: ["List(recursive=true) etcd3" audit-id:bfa0bf72-5dd4-4d6f-b252-0d87c063a863,key:/pods/headlamp,resourceVersion:,resourceVersionMatch:,limit:0,continue: 610ms (16:45:58.684)]
	Trace[66538718]: ---"Writing http response done" count:1 51ms (16:45:59.294)
	Trace[66538718]: [610.226996ms] [610.226996ms] END
	I0709 16:45:59.295044       1 trace.go:236] Trace[772865213]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:8827326f-fc96-443b-81b0-d0d3a2e322bd,client:172.18.206.170,api-group:discovery.k8s.io,api-version:v1,name:tiller-deploy-ql47v,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpointslices,scope:resource,url:/apis/discovery.k8s.io/v1/namespaces/kube-system/endpointslices/tiller-deploy-ql47v,user-agent:kube-controller-manager/v1.30.2 (linux/amd64) kubernetes/3968350/system:serviceaccount:kube-system:endpointslice-controller,verb:PUT (09-Jul-2024 16:45:58.512) (total time: 732ms):
	Trace[772865213]: ["GuaranteedUpdate etcd3" audit-id:8827326f-fc96-443b-81b0-d0d3a2e322bd,key:/endpointslices/kube-system/tiller-deploy-ql47v,type:*discovery.EndpointSlice,resource:endpointslices.discovery.k8s.io 732ms (16:45:58.512)
	Trace[772865213]:  ---"Txn call completed" 728ms (16:45:59.244)]
	Trace[772865213]: [732.271916ms] [732.271916ms] END
	I0709 16:45:59.353518       1 trace.go:236] Trace[366164835]: "Patch" accept:application/json, */*,audit-id:34047284-e9dc-4e9d-bc6e-9fe755fb8ebb,client:10.244.0.22,api-group:,api-version:v1,name:ingress-nginx-controller-768f948f8f-s682g.17e0992caa369113,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/ingress-nginx/events/ingress-nginx-controller-768f948f8f-s682g.17e0992caa369113,user-agent:nginx-ingress-controller/v1.10.1 (linux/amd64) ingress-nginx/4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518,verb:PATCH (09-Jul-2024 16:45:58.681) (total time: 671ms):
	Trace[366164835]: ["GuaranteedUpdate etcd3" audit-id:34047284-e9dc-4e9d-bc6e-9fe755fb8ebb,key:/events/ingress-nginx/ingress-nginx-controller-768f948f8f-s682g.17e0992caa369113,type:*core.Event,resource:events 668ms (16:45:58.684)
	Trace[366164835]:  ---"initial value restored" 603ms (16:45:59.288)
	Trace[366164835]:  ---"Txn call completed" 64ms (16:45:59.353)]
	Trace[366164835]: ---"Object stored in database" 64ms (16:45:59.353)
	Trace[366164835]: [671.576556ms] [671.576556ms] END
	I0709 16:45:59.700082       1 trace.go:236] Trace[1002803607]: "Delete" accept:application/json,audit-id:62ce95bf-f28a-4b28-bca7-7a05f88f0ac7,client:127.0.0.1,api-group:,api-version:v1,name:tiller-deploy,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:services,scope:resource,url:/api/v1/namespaces/kube-system/services/tiller-deploy,user-agent:kubectl/v1.30.2 (linux/amd64) kubernetes/3968350,verb:DELETE (09-Jul-2024 16:45:58.437) (total time: 1262ms):
	Trace[1002803607]: ---"Object deleted from database" 1262ms (16:45:59.699)
	Trace[1002803607]: [1.262738271s] [1.262738271s] END
	I0709 16:46:00.566164       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.25.237"}
	I0709 16:46:25.577753       1 trace.go:236] Trace[1132871946]: "Create" accept:application/json,audit-id:53539ee3-9dde-48ee-8aa2-a01d7019d034,client:172.18.192.1,api-group:networking.k8s.io,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:ingresses,scope:resource,url:/apis/networking.k8s.io/v1/namespaces/kube-system/ingresses,user-agent:kubectl/v1.30.2 (windows/amd64) kubernetes/3968350,verb:POST (09-Jul-2024 16:46:24.819) (total time: 758ms):
	Trace[1132871946]: [758.105978ms] [758.105978ms] END
	I0709 16:46:26.023716       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.137.72"}
	
	
	==> kube-controller-manager [bdb1112e8b27] <==
	I0709 16:45:46.796935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="165.108145ms"
	I0709 16:45:46.826768       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="29.715098ms"
	I0709 16:45:46.827576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="133.5µs"
	E0709 16:45:53.470755       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:45:54.538294       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:45:54.538437       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0709 16:45:57.469327       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:45:57.469408       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0709 16:45:58.413665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="5.1µs"
	I0709 16:45:58.532534       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0709 16:45:58.532588       1 shared_informer.go:320] Caches are synced for resource quota
	I0709 16:45:58.580150       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0709 16:45:58.580376       1 shared_informer.go:320] Caches are synced for garbage collector
	I0709 16:46:01.602692       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="49.1µs"
	I0709 16:46:01.702873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="41.123998ms"
	I0709 16:46:01.703103       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="67.8µs"
	W0709 16:46:02.284101       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:46:02.284217       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0709 16:46:02.537578       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0709 16:46:04.016066       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="20.1µs"
	W0709 16:46:12.787096       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0709 16:46:12.787141       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0709 16:46:25.610104       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="617.759226ms"
	I0709 16:46:25.964711       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="354.273591ms"
	I0709 16:46:25.964783       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="35.7µs"
	
	
	==> kube-proxy [34adb0c36293] <==
	I0709 16:41:36.202108       1 server_linux.go:69] "Using iptables proxy"
	I0709 16:41:36.413169       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.206.170"]
	I0709 16:41:37.063298       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 16:41:37.063707       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 16:41:37.063752       1 server_linux.go:165] "Using iptables Proxier"
	I0709 16:41:37.107957       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 16:41:37.108416       1 server.go:872] "Version info" version="v1.30.2"
	I0709 16:41:37.108442       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 16:41:37.110925       1 config.go:192] "Starting service config controller"
	I0709 16:41:37.110957       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 16:41:37.111000       1 config.go:101] "Starting endpoint slice config controller"
	I0709 16:41:37.111016       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 16:41:37.112107       1 config.go:319] "Starting node config controller"
	I0709 16:41:37.112135       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 16:41:37.212287       1 shared_informer.go:320] Caches are synced for node config
	I0709 16:41:37.212329       1 shared_informer.go:320] Caches are synced for service config
	I0709 16:41:37.212349       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [914307c141b3] <==
	W0709 16:41:11.097266       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 16:41:11.097786       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 16:41:11.226085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0709 16:41:11.226356       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0709 16:41:11.308717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0709 16:41:11.309157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0709 16:41:11.332685       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0709 16:41:11.333258       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0709 16:41:11.391848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0709 16:41:11.392242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0709 16:41:11.449666       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0709 16:41:11.449931       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0709 16:41:11.491693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0709 16:41:11.492001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0709 16:41:11.542843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0709 16:41:11.543191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0709 16:41:11.544713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0709 16:41:11.544959       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0709 16:41:11.578287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0709 16:41:11.578562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0709 16:41:11.629275       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0709 16:41:11.629376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0709 16:41:11.630683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 16:41:11.630742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0709 16:41:13.079786       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 09 16:46:12 addons-291800 kubelet[2285]: E0709 16:46:12.263315    2285 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc86917c-2fa7-44ed-8c10-fece12c6bff0" containerName="registry"
	Jul 09 16:46:12 addons-291800 kubelet[2285]: E0709 16:46:12.263326    2285 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7cc47a1f-99e9-4e27-98d3-227f52119db5" containerName="tiller"
	Jul 09 16:46:12 addons-291800 kubelet[2285]: E0709 16:46:12.263334    2285 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bab934ae-9fd8-4c82-a9c1-9060abb4bd5e" containerName="registry-proxy"
	Jul 09 16:46:12 addons-291800 kubelet[2285]: E0709 16:46:12.263354    2285 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e616dbb-0da7-4289-b55b-e804fbc7a803" containerName="gadget"
	Jul 09 16:46:12 addons-291800 kubelet[2285]: I0709 16:46:12.263429    2285 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e616dbb-0da7-4289-b55b-e804fbc7a803" containerName="gadget"
	Jul 09 16:46:12 addons-291800 kubelet[2285]: I0709 16:46:12.263457    2285 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc47a1f-99e9-4e27-98d3-227f52119db5" containerName="tiller"
	Jul 09 16:46:12 addons-291800 kubelet[2285]: I0709 16:46:12.263535    2285 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab934ae-9fd8-4c82-a9c1-9060abb4bd5e" containerName="registry-proxy"
	Jul 09 16:46:12 addons-291800 kubelet[2285]: I0709 16:46:12.263552    2285 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc86917c-2fa7-44ed-8c10-fece12c6bff0" containerName="registry"
	Jul 09 16:46:12 addons-291800 kubelet[2285]: I0709 16:46:12.344779    2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsbdr\" (UniqueName: \"kubernetes.io/projected/7f97b295-a1aa-4777-8329-5a08d00c2945-kube-api-access-lsbdr\") pod \"task-pv-pod\" (UID: \"7f97b295-a1aa-4777-8329-5a08d00c2945\") " pod="default/task-pv-pod"
	Jul 09 16:46:12 addons-291800 kubelet[2285]: I0709 16:46:12.345057    2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77b334b6-16c0-4e5b-b4f7-ce44e7af576e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^bf36b918-3e12-11ef-8503-d2c29a95ed16\") pod \"task-pv-pod\" (UID: \"7f97b295-a1aa-4777-8329-5a08d00c2945\") " pod="default/task-pv-pod"
	Jul 09 16:46:12 addons-291800 kubelet[2285]: I0709 16:46:12.345195    2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7f97b295-a1aa-4777-8329-5a08d00c2945-gcp-creds\") pod \"task-pv-pod\" (UID: \"7f97b295-a1aa-4777-8329-5a08d00c2945\") " pod="default/task-pv-pod"
	Jul 09 16:46:12 addons-291800 kubelet[2285]: I0709 16:46:12.464746    2285 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-77b334b6-16c0-4e5b-b4f7-ce44e7af576e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^bf36b918-3e12-11ef-8503-d2c29a95ed16\") pod \"task-pv-pod\" (UID: \"7f97b295-a1aa-4777-8329-5a08d00c2945\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/e59e469568b3e4430931f644dce5443f6766dc481b5ebdc561f4a039de3a03d2/globalmount\"" pod="default/task-pv-pod"
	Jul 09 16:46:13 addons-291800 kubelet[2285]: E0709 16:46:13.336090    2285 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 16:46:13 addons-291800 kubelet[2285]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 16:46:13 addons-291800 kubelet[2285]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 16:46:13 addons-291800 kubelet[2285]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 16:46:13 addons-291800 kubelet[2285]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 16:46:13 addons-291800 kubelet[2285]: I0709 16:46:13.484912    2285 scope.go:117] "RemoveContainer" containerID="18327e3e2772de3ff4810026e525eaf909e9d6c76a4c0b508b537216ffcf56e0"
	Jul 09 16:46:13 addons-291800 kubelet[2285]: I0709 16:46:13.543961    2285 scope.go:117] "RemoveContainer" containerID="8e319b25e8717fab8a4e948ed4973882a0b00b6e39344b2de3f6ab0ab363b346"
	Jul 09 16:46:13 addons-291800 kubelet[2285]: I0709 16:46:13.582703    2285 scope.go:117] "RemoveContainer" containerID="8a373c794473091afc618d7392a30d6c6ac5f0ef8e1e280f4bcaf9468c0569ae"
	Jul 09 16:46:13 addons-291800 kubelet[2285]: I0709 16:46:13.631661    2285 scope.go:117] "RemoveContainer" containerID="256c83a28c1856c4a474ab75cf10d34a0d2c49a087fae9072b437168f3c46f49"
	Jul 09 16:46:25 addons-291800 kubelet[2285]: I0709 16:46:25.605556    2285 topology_manager.go:215] "Topology Admit Handler" podUID="ab8ce3a4-6ca1-4c33-962c-e20a09bf6410" podNamespace="default" podName="hello-world-app-86c47465fc-6bxgb"
	Jul 09 16:46:25 addons-291800 kubelet[2285]: I0709 16:46:25.715453    2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6zc4\" (UniqueName: \"kubernetes.io/projected/ab8ce3a4-6ca1-4c33-962c-e20a09bf6410-kube-api-access-h6zc4\") pod \"hello-world-app-86c47465fc-6bxgb\" (UID: \"ab8ce3a4-6ca1-4c33-962c-e20a09bf6410\") " pod="default/hello-world-app-86c47465fc-6bxgb"
	Jul 09 16:46:25 addons-291800 kubelet[2285]: I0709 16:46:25.715631    2285 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ab8ce3a4-6ca1-4c33-962c-e20a09bf6410-gcp-creds\") pod \"hello-world-app-86c47465fc-6bxgb\" (UID: \"ab8ce3a4-6ca1-4c33-962c-e20a09bf6410\") " pod="default/hello-world-app-86c47465fc-6bxgb"
	Jul 09 16:46:28 addons-291800 kubelet[2285]: I0709 16:46:28.042743    2285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6b60efef76a36fedefeda9034517ebbcda0c0bc49a01deda9d8e26125dc26ea"
	
	
	==> storage-provisioner [825df3afa4a1] <==
	I0709 16:42:06.651668       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0709 16:42:06.782215       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0709 16:42:06.785682       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0709 16:42:06.816523       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0709 16:42:06.816764       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-291800_36bd679a-963a-4a28-aa5f-b7c40e132b4f!
	I0709 16:42:06.817038       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"953a5670-8a28-453f-b476-a80b5c230322", APIVersion:"v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-291800_36bd679a-963a-4a28-aa5f-b7c40e132b4f became leader
	I0709 16:42:06.919680       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-291800_36bd679a-963a-4a28-aa5f-b7c40e132b4f!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 09:46:17.801556   13476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-291800 -n addons-291800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-291800 -n addons-291800: (12.8148324s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-291800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-zqr26 ingress-nginx-admission-patch-zxm8c volcano-admission-init-cq57x
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-291800 describe pod ingress-nginx-admission-create-zqr26 ingress-nginx-admission-patch-zxm8c volcano-admission-init-cq57x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-291800 describe pod ingress-nginx-admission-create-zqr26 ingress-nginx-admission-patch-zxm8c volcano-admission-init-cq57x: exit status 1 (186.258ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zqr26" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zxm8c" not found
	Error from server (NotFound): pods "volcano-admission-init-cq57x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-291800 describe pod ingress-nginx-admission-create-zqr26 ingress-nginx-admission-patch-zxm8c volcano-admission-init-cq57x: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.95s)

                                                
                                    
TestErrorSpam/setup (197.08s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-783300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-783300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 --driver=hyperv: (3m17.0741397s)
error_spam_test.go:96: unexpected stderr: "W0709 09:50:46.984680    3600 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-783300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=19199
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-783300" primary control-plane node in "nospam-783300" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-783300" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0709 09:50:46.984680    3600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (197.08s)
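The unexpected stderr line above points at a metadata file under `.docker\contexts\meta\<hex digest>`. Docker keys each CLI context's metadata directory by the SHA-256 digest of the context name, so the digest in the path can be checked against `"default"` to confirm which context the stale/missing `meta.json` belongs to; a quick sketch:

```python
import hashlib

# Docker names each context's metadata directory after the SHA-256
# digest of the context name. Computing the digest for "default"
# identifies which context the warning's meta.json path refers to.
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
```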

TestFunctional/serial/MinikubeKubectlCmdDirectly (33.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
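The failure on this step is hard-link semantics: creating a hard link fails when the destination path already exists (on Windows, "Cannot create a file when that file already exists"). A minimal Python sketch of an idempotent variant that removes a stale destination first (a hypothetical helper for illustration, not minikube's actual code):

```python
import os
import tempfile

def link_or_replace(src, dst):
    # Hard-linking onto an existing path raises FileExistsError
    # (the error seen above); remove any stale destination first.
    try:
        os.remove(dst)
    except FileNotFoundError:
        pass
    os.link(src, dst)

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "minikube.exe")
    dst = os.path.join(d, "kubectl.exe")
    with open(src, "w") as f:
        f.write("binary")
    os.link(src, dst)            # first link: fine
    link_or_replace(src, dst)    # re-run: replaces instead of failing
    print(os.path.samestat(os.stat(src), os.stat(dst)))
```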
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-779900 -n functional-779900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-779900 -n functional-779900: (11.5869356s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 logs -n 25: (8.4537132s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-783300 --log_dir                                     | nospam-783300     | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:55 PDT | 09 Jul 24 09:55 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-783300 --log_dir                                     | nospam-783300     | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:55 PDT | 09 Jul 24 09:55 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-783300 --log_dir                                     | nospam-783300     | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:55 PDT | 09 Jul 24 09:55 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-783300 --log_dir                                     | nospam-783300     | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:55 PDT | 09 Jul 24 09:55 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-783300 --log_dir                                     | nospam-783300     | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:55 PDT | 09 Jul 24 09:56 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-783300 --log_dir                                     | nospam-783300     | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:56 PDT | 09 Jul 24 09:56 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-783300 --log_dir                                     | nospam-783300     | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:56 PDT | 09 Jul 24 09:56 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-783300                                            | nospam-783300     | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:56 PDT | 09 Jul 24 09:56 PDT |
	| start   | -p functional-779900                                        | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:56 PDT | 09 Jul 24 10:00 PDT |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-779900                                        | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:00 PDT | 09 Jul 24 10:02 PDT |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-779900 cache add                                 | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:02 PDT | 09 Jul 24 10:03 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-779900 cache add                                 | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:03 PDT | 09 Jul 24 10:03 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-779900 cache add                                 | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:03 PDT | 09 Jul 24 10:03 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-779900 cache add                                 | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:03 PDT | 09 Jul 24 10:03 PDT |
	|         | minikube-local-cache-test:functional-779900                 |                   |                   |         |                     |                     |
	| cache   | functional-779900 cache delete                              | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:03 PDT | 09 Jul 24 10:03 PDT |
	|         | minikube-local-cache-test:functional-779900                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:03 PDT | 09 Jul 24 10:03 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:03 PDT | 09 Jul 24 10:03 PDT |
	| ssh     | functional-779900 ssh sudo                                  | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:03 PDT | 09 Jul 24 10:03 PDT |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-779900                                           | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:03 PDT | 09 Jul 24 10:03 PDT |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-779900 ssh                                       | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:03 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-779900 cache reload                              | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:03 PDT | 09 Jul 24 10:04 PDT |
	| ssh     | functional-779900 ssh                                       | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:04 PDT | 09 Jul 24 10:04 PDT |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:04 PDT | 09 Jul 24 10:04 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:04 PDT | 09 Jul 24 10:04 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-779900 kubectl --                                | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:04 PDT | 09 Jul 24 10:04 PDT |
	|         | --context functional-779900                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 10:00:47
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 10:00:47.620531    4756 out.go:291] Setting OutFile to fd 896 ...
	I0709 10:00:47.621392    4756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:00:47.621392    4756 out.go:304] Setting ErrFile to fd 748...
	I0709 10:00:47.621392    4756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:00:47.643825    4756 out.go:298] Setting JSON to false
	I0709 10:00:47.648214    4756 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2716,"bootTime":1720541731,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 10:00:47.648214    4756 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 10:00:47.649491    4756 out.go:177] * [functional-779900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 10:00:47.656830    4756 notify.go:220] Checking for updates...
	I0709 10:00:47.659188    4756 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:00:47.661951    4756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 10:00:47.664762    4756 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 10:00:47.667985    4756 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 10:00:47.670579    4756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 10:00:47.672752    4756 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:00:47.672752    4756 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 10:00:52.756971    4756 out.go:177] * Using the hyperv driver based on existing profile
	I0709 10:00:52.758895    4756 start.go:297] selected driver: hyperv
	I0709 10:00:52.760915    4756 start.go:901] validating driver "hyperv" against &{Name:functional-779900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.2 ClusterName:functional-779900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.200.147 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 10:00:52.760915    4756 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 10:00:52.815937    4756 cni.go:84] Creating CNI manager for ""
	I0709 10:00:52.815937    4756 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 10:00:52.815937    4756 start.go:340] cluster config:
	{Name:functional-779900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-779900 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.200.147 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 10:00:52.816795    4756 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 10:00:52.821714    4756 out.go:177] * Starting "functional-779900" primary control-plane node in "functional-779900" cluster
	I0709 10:00:52.824138    4756 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 10:00:52.824138    4756 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 10:00:52.824138    4756 cache.go:56] Caching tarball of preloaded images
	I0709 10:00:52.824970    4756 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 10:00:52.824970    4756 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 10:00:52.824970    4756 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\config.json ...
	I0709 10:00:52.826978    4756 start.go:360] acquireMachinesLock for functional-779900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 10:00:52.826978    4756 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-779900"
	I0709 10:00:52.826978    4756 start.go:96] Skipping create...Using existing machine configuration
	I0709 10:00:52.826978    4756 fix.go:54] fixHost starting: 
	I0709 10:00:52.826978    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:00:55.502177    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:00:55.502177    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:00:55.502177    4756 fix.go:112] recreateIfNeeded on functional-779900: state=Running err=<nil>
	W0709 10:00:55.502177    4756 fix.go:138] unexpected machine state, will restart: <nil>
	I0709 10:00:55.506474    4756 out.go:177] * Updating the running hyperv "functional-779900" VM ...
	I0709 10:00:55.508872    4756 machine.go:94] provisionDockerMachine start ...
	I0709 10:00:55.508965    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:00:57.619390    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:00:57.619390    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:00:57.619390    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:00.115832    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:00.115832    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:00.135315    4756 main.go:141] libmachine: Using SSH client type: native
	I0709 10:01:00.135315    4756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.200.147 22 <nil> <nil>}
	I0709 10:01:00.135315    4756 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 10:01:00.277068    4756 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-779900
	
	I0709 10:01:00.277068    4756 buildroot.go:166] provisioning hostname "functional-779900"
	I0709 10:01:00.277068    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:02.369883    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:02.369883    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:02.369883    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:04.929143    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:04.929143    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:04.946690    4756 main.go:141] libmachine: Using SSH client type: native
	I0709 10:01:04.947373    4756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.200.147 22 <nil> <nil>}
	I0709 10:01:04.947434    4756 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-779900 && echo "functional-779900" | sudo tee /etc/hostname
	I0709 10:01:05.109807    4756 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-779900
	
	I0709 10:01:05.109910    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:07.251366    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:07.263364    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:07.263364    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:09.834958    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:09.834958    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:09.841601    4756 main.go:141] libmachine: Using SSH client type: native
	I0709 10:01:09.842400    4756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.200.147 22 <nil> <nil>}
	I0709 10:01:09.842400    4756 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-779900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-779900/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-779900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 10:01:09.979183    4756 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 10:01:09.979183    4756 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 10:01:09.979183    4756 buildroot.go:174] setting up certificates
	I0709 10:01:09.979183    4756 provision.go:84] configureAuth start
	I0709 10:01:09.979183    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:12.210951    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:12.210951    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:12.210951    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:14.800320    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:14.803847    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:14.803847    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:16.984371    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:16.984371    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:16.995608    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:19.565534    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:19.577386    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:19.577386    4756 provision.go:143] copyHostCerts
	I0709 10:01:19.577386    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 10:01:19.577386    4756 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 10:01:19.577386    4756 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 10:01:19.578386    4756 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 10:01:19.579416    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 10:01:19.579637    4756 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 10:01:19.579637    4756 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 10:01:19.580193    4756 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 10:01:19.581306    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 10:01:19.581306    4756 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 10:01:19.581306    4756 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 10:01:19.581975    4756 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 10:01:19.582852    4756 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-779900 san=[127.0.0.1 172.18.200.147 functional-779900 localhost minikube]
	I0709 10:01:19.903498    4756 provision.go:177] copyRemoteCerts
	I0709 10:01:19.917485    4756 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 10:01:19.917485    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:22.054925    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:22.054925    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:22.064682    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:24.568474    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:24.568474    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:24.568474    4756 sshutil.go:53] new ssh client: &{IP:172.18.200.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-779900\id_rsa Username:docker}
	I0709 10:01:24.675615    4756 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7581213s)
	I0709 10:01:24.675615    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 10:01:24.676367    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 10:01:24.719706    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 10:01:24.720298    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0709 10:01:24.765643    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 10:01:24.766183    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 10:01:24.808901    4756 provision.go:87] duration metric: took 14.8296914s to configureAuth
	I0709 10:01:24.808901    4756 buildroot.go:189] setting minikube options for container-runtime
	I0709 10:01:24.809711    4756 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:01:24.809711    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:26.884535    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:26.885057    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:26.885057    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:29.380099    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:29.380364    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:29.386264    4756 main.go:141] libmachine: Using SSH client type: native
	I0709 10:01:29.386986    4756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.200.147 22 <nil> <nil>}
	I0709 10:01:29.386986    4756 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 10:01:29.519654    4756 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 10:01:29.519654    4756 buildroot.go:70] root file system type: tmpfs
	I0709 10:01:29.520247    4756 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 10:01:29.520247    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:31.652845    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:31.652845    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:31.652845    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:34.132303    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:34.132303    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:34.149143    4756 main.go:141] libmachine: Using SSH client type: native
	I0709 10:01:34.149760    4756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.200.147 22 <nil> <nil>}
	I0709 10:01:34.149909    4756 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 10:01:34.304043    4756 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 10:01:34.304171    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:36.388537    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:36.403163    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:36.403367    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:38.944277    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:38.944277    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:38.950420    4756 main.go:141] libmachine: Using SSH client type: native
	I0709 10:01:38.950616    4756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.200.147 22 <nil> <nil>}
	I0709 10:01:38.951146    4756 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 10:01:39.092666    4756 main.go:141] libmachine: SSH cmd err, output: <nil>: 
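	The SSH command above uses an install-if-changed idiom: the freshly generated `docker.service.new` replaces the live unit (followed by `daemon-reload` and a restart) only when `diff -u` reports a difference. A minimal sketch of that idiom, using throwaway temp files rather than the real `/lib/systemd/system/docker.service`:

	```shell
	# Sketch of the install-if-changed pattern run over SSH in the log.
	# Paths and file contents are stand-ins, not the VM's real unit files.
	set -eu
	old=$(mktemp)
	new=$(mktemp)
	printf 'A=1\n' > "$old"
	printf 'A=2\n' > "$new"
	# `diff -u` exits non-zero when the files differ, so the block after
	# `||` runs only in that case (in the log: mv + daemon-reload + restart).
	diff -u "$old" "$new" >/dev/null || {
	  mv "$new" "$old"
	  echo "installed new config"   # stand-in for the systemctl calls
	}
	cat "$old"   # prints "A=2"
	```

	Because identical files make `diff` exit 0, an unchanged configuration skips the Docker restart entirely, which is why the log shows no restart output here.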
	I0709 10:01:39.092666    4756 machine.go:97] duration metric: took 43.5837161s to provisionDockerMachine
	I0709 10:01:39.092666    4756 start.go:293] postStartSetup for "functional-779900" (driver="hyperv")
	I0709 10:01:39.092666    4756 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 10:01:39.104501    4756 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 10:01:39.104501    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:41.221489    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:41.232837    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:41.232837    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:43.779956    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:43.791219    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:43.791219    4756 sshutil.go:53] new ssh client: &{IP:172.18.200.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-779900\id_rsa Username:docker}
	I0709 10:01:43.897105    4756 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7925417s)
	I0709 10:01:43.910356    4756 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 10:01:43.916764    4756 command_runner.go:130] > NAME=Buildroot
	I0709 10:01:43.916764    4756 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 10:01:43.916764    4756 command_runner.go:130] > ID=buildroot
	I0709 10:01:43.916764    4756 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 10:01:43.916764    4756 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 10:01:43.916764    4756 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 10:01:43.916764    4756 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 10:01:43.917771    4756 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 10:01:43.918352    4756 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 10:01:43.918352    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 10:01:43.920005    4756 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\15032\hosts -> hosts in /etc/test/nested/copy/15032
	I0709 10:01:43.920005    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\15032\hosts -> /etc/test/nested/copy/15032/hosts
	I0709 10:01:43.931859    4756 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/15032
	I0709 10:01:43.951643    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 10:01:44.003495    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\15032\hosts --> /etc/test/nested/copy/15032/hosts (40 bytes)
	I0709 10:01:44.049867    4756 start.go:296] duration metric: took 4.9571913s for postStartSetup
	I0709 10:01:44.049867    4756 fix.go:56] duration metric: took 51.2227966s for fixHost
	I0709 10:01:44.049867    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:46.124648    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:46.124648    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:46.136071    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:48.625350    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:48.636624    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:48.642406    4756 main.go:141] libmachine: Using SSH client type: native
	I0709 10:01:48.642936    4756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.200.147 22 <nil> <nil>}
	I0709 10:01:48.643112    4756 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 10:01:48.771666    4756 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720544508.778477518
	
	I0709 10:01:48.772269    4756 fix.go:216] guest clock: 1720544508.778477518
	I0709 10:01:48.772269    4756 fix.go:229] Guest: 2024-07-09 10:01:48.778477518 -0700 PDT Remote: 2024-07-09 10:01:44.049867 -0700 PDT m=+56.528709801 (delta=4.728610518s)
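	The `fix.go` lines above compare the guest clock (read via `date` over SSH) against the host clock and, a few lines later, resync the guest with `sudo date -s @1720544508`. A rough sketch of that drift check, using the epoch seconds from this log; the 2-second threshold is illustrative, not minikube's actual cutoff:

	```shell
	# Guest/host clock-drift check, sketched from the log's fix.go output.
	# guest: epoch seconds the guest reported; host_now: the host's clock
	# at the same moment (values taken from the log above, ~4.7s apart).
	guest=1720544508
	host_now=1720544504
	delta=$((guest - host_now))
	# absolute value of the drift
	if [ "$delta" -lt 0 ]; then delta=$((-delta)); fi
	# threshold of 2s is an assumption for illustration only
	if [ "$delta" -gt 2 ]; then
	  echo "would run: sudo date -s @$guest"
	fi
	```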
	I0709 10:01:48.772386    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:50.933685    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:50.933685    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:50.943896    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:53.426691    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:53.426786    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:53.431339    4756 main.go:141] libmachine: Using SSH client type: native
	I0709 10:01:53.432055    4756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.200.147 22 <nil> <nil>}
	I0709 10:01:53.432055    4756 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720544508
	I0709 10:01:53.574408    4756 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 17:01:48 UTC 2024
	
	I0709 10:01:53.574472    4756 fix.go:236] clock set: Tue Jul  9 17:01:48 UTC 2024
	 (err=<nil>)
	I0709 10:01:53.574472    4756 start.go:83] releasing machines lock for "functional-779900", held for 1m0.747385s
	I0709 10:01:53.574761    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:55.684548    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:01:55.695826    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:55.695826    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:01:58.230145    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:01:58.230145    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:01:58.234766    4756 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 10:01:58.234766    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:01:58.244983    4756 ssh_runner.go:195] Run: cat /version.json
	I0709 10:01:58.244983    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:02:00.437016    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:02:00.442497    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:02:00.442792    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:02:00.449294    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:02:00.449294    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:02:00.449294    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:02:03.126401    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:02:03.138701    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:02:03.138701    4756 sshutil.go:53] new ssh client: &{IP:172.18.200.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-779900\id_rsa Username:docker}
	I0709 10:02:03.163416    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:02:03.163416    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:02:03.163618    4756 sshutil.go:53] new ssh client: &{IP:172.18.200.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-779900\id_rsa Username:docker}
	I0709 10:02:03.231227    4756 command_runner.go:130] > {"iso_version": "v1.33.1-1720433170-19199", "kicbase_version": "v0.0.44-1720012048-19186", "minikube_version": "v1.33.1", "commit": "41ed6339bbe6a947e5e92015e7dd216db14d0b72"}
	I0709 10:02:03.231314    4756 ssh_runner.go:235] Completed: cat /version.json: (4.986322s)
	I0709 10:02:03.245348    4756 ssh_runner.go:195] Run: systemctl --version
	I0709 10:02:03.319019    4756 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 10:02:03.319019    4756 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0842436s)
	I0709 10:02:03.319019    4756 command_runner.go:130] > systemd 252 (252)
	I0709 10:02:03.319019    4756 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0709 10:02:03.333240    4756 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 10:02:03.338630    4756 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0709 10:02:03.344538    4756 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 10:02:03.355906    4756 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 10:02:03.375097    4756 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0709 10:02:03.375097    4756 start.go:494] detecting cgroup driver to use...
	I0709 10:02:03.375097    4756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:02:03.409558    4756 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 10:02:03.425286    4756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 10:02:03.454824    4756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 10:02:03.477805    4756 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 10:02:03.491915    4756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 10:02:03.522570    4756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:02:03.553764    4756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 10:02:03.594799    4756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:02:03.631921    4756 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 10:02:03.664803    4756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 10:02:03.699224    4756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 10:02:03.733576    4756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 10:02:03.770361    4756 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 10:02:03.787527    4756 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 10:02:03.802246    4756 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 10:02:03.833051    4756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:02:04.094635    4756 ssh_runner.go:195] Run: sudo systemctl restart containerd
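	The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place, e.g. forcing `SystemdCgroup = false` so containerd uses the `cgroupfs` driver. A self-contained sketch of that edit against a throwaway file (assumes GNU sed, as on the Buildroot guest; the sample TOML is illustrative, not the VM's real config):

	```shell
	# Reproduce the SystemdCgroup toggle the log applies with sed.
	cfg=$(mktemp)
	cat > "$cfg" <<'EOF'
	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	EOF
	# Same substitution as in the log: keep leading indentation (\1),
	# replace the value regardless of what it was.
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
	grep 'SystemdCgroup' "$cfg"   # prints "  SystemdCgroup = false"
	```

	The trailing `systemctl daemon-reload` / `systemctl restart containerd` in the log are what make the edited file take effect.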
	I0709 10:02:04.124022    4756 start.go:494] detecting cgroup driver to use...
	I0709 10:02:04.136463    4756 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 10:02:04.159603    4756 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 10:02:04.159603    4756 command_runner.go:130] > [Unit]
	I0709 10:02:04.159670    4756 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 10:02:04.159670    4756 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 10:02:04.159670    4756 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 10:02:04.159670    4756 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 10:02:04.159670    4756 command_runner.go:130] > StartLimitBurst=3
	I0709 10:02:04.159670    4756 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 10:02:04.159670    4756 command_runner.go:130] > [Service]
	I0709 10:02:04.159748    4756 command_runner.go:130] > Type=notify
	I0709 10:02:04.159748    4756 command_runner.go:130] > Restart=on-failure
	I0709 10:02:04.159748    4756 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 10:02:04.159748    4756 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 10:02:04.159828    4756 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 10:02:04.159828    4756 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 10:02:04.159858    4756 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 10:02:04.159858    4756 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 10:02:04.159858    4756 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 10:02:04.159858    4756 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 10:02:04.159858    4756 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 10:02:04.159858    4756 command_runner.go:130] > ExecStart=
	I0709 10:02:04.159858    4756 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 10:02:04.159858    4756 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 10:02:04.159858    4756 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 10:02:04.159858    4756 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 10:02:04.159858    4756 command_runner.go:130] > LimitNOFILE=infinity
	I0709 10:02:04.159858    4756 command_runner.go:130] > LimitNPROC=infinity
	I0709 10:02:04.159858    4756 command_runner.go:130] > LimitCORE=infinity
	I0709 10:02:04.159858    4756 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 10:02:04.159858    4756 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 10:02:04.159858    4756 command_runner.go:130] > TasksMax=infinity
	I0709 10:02:04.159858    4756 command_runner.go:130] > TimeoutStartSec=0
	I0709 10:02:04.159858    4756 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 10:02:04.159858    4756 command_runner.go:130] > Delegate=yes
	I0709 10:02:04.159858    4756 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 10:02:04.159858    4756 command_runner.go:130] > KillMode=process
	I0709 10:02:04.159858    4756 command_runner.go:130] > [Install]
	I0709 10:02:04.159858    4756 command_runner.go:130] > WantedBy=multi-user.target
	I0709 10:02:04.172062    4756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:02:04.203660    4756 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 10:02:04.239152    4756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:02:04.279228    4756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:02:04.302143    4756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:02:04.345360    4756 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 10:02:04.354856    4756 ssh_runner.go:195] Run: which cri-dockerd
	I0709 10:02:04.365295    4756 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 10:02:04.378302    4756 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 10:02:04.395830    4756 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 10:02:04.441767    4756 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 10:02:04.713139    4756 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 10:02:04.942002    4756 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 10:02:04.942364    4756 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 10:02:04.992752    4756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:02:05.249952    4756 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 10:02:18.176122    4756 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.9261472s)
	I0709 10:02:18.188547    4756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 10:02:18.228219    4756 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0709 10:02:18.283412    4756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:02:18.322311    4756 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 10:02:18.535839    4756 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 10:02:18.739439    4756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:02:18.924113    4756 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 10:02:18.962611    4756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:02:19.002011    4756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:02:19.218500    4756 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 10:02:19.345021    4756 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 10:02:19.360017    4756 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 10:02:19.367969    4756 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0709 10:02:19.368008    4756 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0709 10:02:19.368008    4756 command_runner.go:130] > Device: 0,22	Inode: 1492        Links: 1
	I0709 10:02:19.368008    4756 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0709 10:02:19.368008    4756 command_runner.go:130] > Access: 2024-07-09 17:02:19.278293622 +0000
	I0709 10:02:19.368008    4756 command_runner.go:130] > Modify: 2024-07-09 17:02:19.248293199 +0000
	I0709 10:02:19.368008    4756 command_runner.go:130] > Change: 2024-07-09 17:02:19.254293283 +0000
	I0709 10:02:19.368008    4756 command_runner.go:130] >  Birth: -
	I0709 10:02:19.368008    4756 start.go:562] Will wait 60s for crictl version
	I0709 10:02:19.380853    4756 ssh_runner.go:195] Run: which crictl
	I0709 10:02:19.386801    4756 command_runner.go:130] > /usr/bin/crictl
	I0709 10:02:19.397601    4756 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 10:02:19.448513    4756 command_runner.go:130] > Version:  0.1.0
	I0709 10:02:19.449878    4756 command_runner.go:130] > RuntimeName:  docker
	I0709 10:02:19.449878    4756 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0709 10:02:19.449878    4756 command_runner.go:130] > RuntimeApiVersion:  v1
	I0709 10:02:19.449878    4756 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
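The `crictl version` output above is a flat `Key:  value` listing, so a single field such as `RuntimeVersion` can be pulled out with `awk`. A small sketch, using the same text the log shows:

```shell
# Extract RuntimeVersion from crictl-style "Key:  value" output
out='Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  27.0.3
RuntimeApiVersion:  v1'
ver=$(printf '%s\n' "$out" | awk '/^RuntimeVersion:/ {print $2}')
echo "$ver"   # 27.0.3
```

This matches the double-check in the log, where the same `27.0.3` is then confirmed directly via `docker version --format {{.Server.Version}}`.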
	I0709 10:02:19.459658    4756 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:02:19.489380    4756 command_runner.go:130] > 27.0.3
	I0709 10:02:19.500324    4756 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:02:19.532309    4756 command_runner.go:130] > 27.0.3
	I0709 10:02:19.536742    4756 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 10:02:19.537111    4756 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 10:02:19.541551    4756 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 10:02:19.541551    4756 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 10:02:19.541551    4756 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 10:02:19.541551    4756 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 10:02:19.544558    4756 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 10:02:19.544558    4756 ip.go:210] interface addr: 172.18.192.1/20
	I0709 10:02:19.550254    4756 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 10:02:19.556871    4756 command_runner.go:130] > 172.18.192.1	host.minikube.internal
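The `grep 172.18.192.1 host.minikube.internal$ /etc/hosts` run above is the check half of an idempotent hosts-file update: append the mapping only when it is not already present. The pattern, sketched against a scratch file rather than the real `/etc/hosts` (`add_host` is a hypothetical helper, not a minikube function):

```shell
hosts=$(mktemp)
add_host() {  # add_host FILE NAME IP -- append only if NAME is absent
  grep -q "$2" "$1" || printf '%s\t%s\n' "$3" "$2" >> "$1"
}
add_host "$hosts" host.minikube.internal 172.18.192.1
add_host "$hosts" host.minikube.internal 172.18.192.1   # second call is a no-op
wc -l < "$hosts"
```

Because the grep guard fires on the second call, the file still holds exactly one entry.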
	I0709 10:02:19.561304    4756 kubeadm.go:877] updating cluster {Name:functional-779900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.2 ClusterName:functional-779900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.200.147 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 10:02:19.561545    4756 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 10:02:19.571458    4756 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 10:02:19.596396    4756 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0709 10:02:19.596396    4756 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0709 10:02:19.596396    4756 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0709 10:02:19.596396    4756 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0709 10:02:19.596396    4756 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0709 10:02:19.596396    4756 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0709 10:02:19.596396    4756 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0709 10:02:19.596396    4756 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 10:02:19.597302    4756 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 10:02:19.597426    4756 docker.go:615] Images already preloaded, skipping extraction
	I0709 10:02:19.609955    4756 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 10:02:19.639095    4756 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0709 10:02:19.640117    4756 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0709 10:02:19.640117    4756 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0709 10:02:19.640117    4756 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0709 10:02:19.640117    4756 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0709 10:02:19.640168    4756 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0709 10:02:19.640168    4756 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0709 10:02:19.640245    4756 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 10:02:19.640279    4756 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 10:02:19.640315    4756 cache_images.go:84] Images are preloaded, skipping loading
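The "Images are preloaded, skipping loading" decision above boils down to a set comparison: the list reported by `docker images --format '{{.Repository}}:{{.Tag}}'` against the images the preload tarball should provide. The set check itself is just a sorted `comm`, sketched here with a subset of the list from the log:

```shell
dir=$(mktemp -d)
# Images the preload should provide (subset of the log's list)
printf '%s\n' \
  registry.k8s.io/kube-apiserver:v1.30.2 \
  registry.k8s.io/etcd:3.5.12-0 \
  registry.k8s.io/pause:3.9 | sort > "$dir/want"
# What "docker images --format" reported (same set, different order)
printf '%s\n' \
  registry.k8s.io/pause:3.9 \
  registry.k8s.io/etcd:3.5.12-0 \
  registry.k8s.io/kube-apiserver:v1.30.2 | sort > "$dir/have"
# comm -23 prints lines only in "want": anything still missing
missing=$(comm -23 "$dir/want" "$dir/have")
[ -z "$missing" ] && echo "images already preloaded, skipping extraction"
```

An empty `missing` is the condition under which extraction of the preload tarball can be skipped, as the log does here.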
	I0709 10:02:19.640355    4756 kubeadm.go:928] updating node { 172.18.200.147 8441 v1.30.2 docker true true} ...
	I0709 10:02:19.640666    4756 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-779900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.200.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:functional-779900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 10:02:19.650199    4756 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 10:02:19.682264    4756 command_runner.go:130] > cgroupfs
	I0709 10:02:19.683369    4756 cni.go:84] Creating CNI manager for ""
	I0709 10:02:19.683445    4756 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 10:02:19.683491    4756 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 10:02:19.683632    4756 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.200.147 APIServerPort:8441 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-779900 NodeName:functional-779900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.200.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.200.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 10:02:19.684031    4756 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.200.147
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-779900"
	  kubeletExtraArgs:
	    node-ip: 172.18.200.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.200.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
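The rendered config above is four YAML documents in one stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`, later shipped to `/var/tmp/minikube/kubeadm.yaml.new`. A quick structural sanity check on such a stream just counts the `kind:` headers:

```shell
# Skeleton of the four-document kubeadm config stream from the log
cfg='apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration'
printf '%s\n' "$cfg" | grep -c '^kind:'   # 4
```

A count other than 4 would mean a document was dropped or a `---` separator was mangled during generation.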
	I0709 10:02:19.695144    4756 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 10:02:19.708567    4756 command_runner.go:130] > kubeadm
	I0709 10:02:19.708567    4756 command_runner.go:130] > kubectl
	I0709 10:02:19.708567    4756 command_runner.go:130] > kubelet
	I0709 10:02:19.715652    4756 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 10:02:19.728921    4756 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 10:02:19.748252    4756 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0709 10:02:19.778970    4756 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 10:02:19.806180    4756 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0709 10:02:19.847182    4756 ssh_runner.go:195] Run: grep 172.18.200.147	control-plane.minikube.internal$ /etc/hosts
	I0709 10:02:19.853788    4756 command_runner.go:130] > 172.18.200.147	control-plane.minikube.internal
	I0709 10:02:19.865438    4756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:02:20.076633    4756 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:02:20.104605    4756 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900 for IP: 172.18.200.147
	I0709 10:02:20.104605    4756 certs.go:194] generating shared ca certs ...
	I0709 10:02:20.104605    4756 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:02:20.105575    4756 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 10:02:20.105846    4756 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 10:02:20.105846    4756 certs.go:256] generating profile certs ...
	I0709 10:02:20.106782    4756 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.key
	I0709 10:02:20.107683    4756 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\apiserver.key.5e1cb1ba
	I0709 10:02:20.108157    4756 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\proxy-client.key
	I0709 10:02:20.108301    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 10:02:20.108482    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 10:02:20.108695    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 10:02:20.108740    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 10:02:20.108740    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 10:02:20.108740    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 10:02:20.108740    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 10:02:20.109475    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 10:02:20.109475    4756 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 10:02:20.110153    4756 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 10:02:20.110153    4756 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 10:02:20.110153    4756 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 10:02:20.110767    4756 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 10:02:20.110767    4756 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 10:02:20.111333    4756 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 10:02:20.111333    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:02:20.111868    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 10:02:20.112126    4756 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 10:02:20.113769    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 10:02:20.159156    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 10:02:20.200867    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 10:02:20.247079    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 10:02:20.314021    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0709 10:02:20.425392    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 10:02:20.485843    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 10:02:20.543618    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0709 10:02:20.592347    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 10:02:20.641309    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 10:02:20.690218    4756 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 10:02:20.735456    4756 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 10:02:20.791913    4756 ssh_runner.go:195] Run: openssl version
	I0709 10:02:20.804691    4756 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0709 10:02:20.814802    4756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 10:02:20.852895    4756 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 10:02:20.860102    4756 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 10:02:20.860102    4756 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 10:02:20.873370    4756 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 10:02:20.880701    4756 command_runner.go:130] > 51391683
	I0709 10:02:20.895720    4756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 10:02:20.925230    4756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 10:02:20.959478    4756 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 10:02:20.962359    4756 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 10:02:20.967114    4756 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 10:02:20.978359    4756 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 10:02:20.982104    4756 command_runner.go:130] > 3ec20f2e
	I0709 10:02:21.008577    4756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 10:02:21.043146    4756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 10:02:21.084716    4756 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:02:21.093773    4756 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:02:21.093813    4756 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:02:21.106652    4756 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:02:21.115749    4756 command_runner.go:130] > b5213941
	I0709 10:02:21.128367    4756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
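Each `openssl x509 -hash` / `ln -fs` pair above installs a PEM under `/etc/ssl/certs` with the subject-hash filename (`<hash>.0`) that OpenSSL uses for CA lookup, which is why the log computes `51391683`, `3ec20f2e`, and `b5213941` before linking. The technique, sketched against a throwaway self-signed cert in a temp dir:

```shell
dir=$(mktemp -d)
# Throwaway self-signed cert (stands in for minikubeCA.pem)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
# OpenSSL resolves CAs via <subject-hash>.0 symlinks in the certs dir
ln -fs "$dir/ca.pem" "$dir/$hash.0"
readlink "$dir/$hash.0"
```

The `test -L … || ln -fs …` form used in the log additionally makes the link creation idempotent across restarts.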
	I0709 10:02:21.163551    4756 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 10:02:21.173527    4756 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 10:02:21.173527    4756 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0709 10:02:21.173527    4756 command_runner.go:130] > Device: 8,1	Inode: 3149138     Links: 1
	I0709 10:02:21.173527    4756 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0709 10:02:21.173527    4756 command_runner.go:130] > Access: 2024-07-09 16:59:48.949645688 +0000
	I0709 10:02:21.173527    4756 command_runner.go:130] > Modify: 2024-07-09 16:59:48.949645688 +0000
	I0709 10:02:21.173527    4756 command_runner.go:130] > Change: 2024-07-09 16:59:48.949645688 +0000
	I0709 10:02:21.173527    4756 command_runner.go:130] >  Birth: 2024-07-09 16:59:48.949645688 +0000
	I0709 10:02:21.188295    4756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0709 10:02:21.198924    4756 command_runner.go:130] > Certificate will not expire
	I0709 10:02:21.215142    4756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0709 10:02:21.218145    4756 command_runner.go:130] > Certificate will not expire
	I0709 10:02:21.237838    4756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0709 10:02:21.240038    4756 command_runner.go:130] > Certificate will not expire
	I0709 10:02:21.260282    4756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0709 10:02:21.273287    4756 command_runner.go:130] > Certificate will not expire
	I0709 10:02:21.287881    4756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0709 10:02:21.296194    4756 command_runner.go:130] > Certificate will not expire
	I0709 10:02:21.307111    4756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0709 10:02:21.310986    4756 command_runner.go:130] > Certificate will not expire
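Each `-checkend 86400` run above asks whether the certificate expires within the next 24 hours: exit status 0 plus "Certificate will not expire" means it does not, which is the condition for reusing the existing certs. Sketched with a short-lived test cert:

```shell
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=t" -days 2 \
  -keyout "$dir/k.pem" -out "$dir/c.pem" 2>/dev/null
# Exit status 0 => still valid for at least the next 86400 seconds
if openssl x509 -noout -in "$dir/c.pem" -checkend 86400; then
  echo "still valid tomorrow"
fi
```

A cert within a day of expiry flips the exit status to 1 and would trigger regeneration instead of reuse.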
	I0709 10:02:21.318231    4756 kubeadm.go:391] StartCluster: {Name:functional-779900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.2 ClusterName:functional-779900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.200.147 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 10:02:21.328157    4756 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 10:02:21.382571    4756 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 10:02:21.419715    4756 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0709 10:02:21.419802    4756 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0709 10:02:21.419802    4756 command_runner.go:130] > /var/lib/minikube/etcd:
	I0709 10:02:21.419840    4756 command_runner.go:130] > member
	W0709 10:02:21.419874    4756 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0709 10:02:21.419874    4756 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0709 10:02:21.419874    4756 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0709 10:02:21.432931    4756 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0709 10:02:21.457974    4756 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0709 10:02:21.458596    4756 kubeconfig.go:125] found "functional-779900" server: "https://172.18.200.147:8441"
	I0709 10:02:21.459673    4756 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:02:21.461142    4756 kapi.go:59] client config for functional-779900: &rest.Config{Host:"https://172.18.200.147:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-779900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-779900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 10:02:21.463603    4756 cert_rotation.go:137] Starting client certificate rotation controller
	I0709 10:02:21.477845    4756 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0709 10:02:21.508524    4756 kubeadm.go:624] The running cluster does not require reconfiguration: 172.18.200.147
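The `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` above is the restart path's reconfiguration check: identical files mean exit status 0, which produces the "does not require reconfiguration" line. The exit-code contract, sketched:

```shell
dir=$(mktemp -d)
printf 'kind: ClusterConfiguration\n' > "$dir/kubeadm.yaml"
cp "$dir/kubeadm.yaml" "$dir/kubeadm.yaml.new"
# diff exits 0 when files match, 1 when they differ
if diff -u "$dir/kubeadm.yaml" "$dir/kubeadm.yaml.new" > /dev/null; then
  echo "no reconfiguration needed"
fi
```

A non-zero status here is what would send the flow down the rewrite-and-`kubeadm`-reapply path instead of the fast restart taken in this run.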
	I0709 10:02:21.517378    4756 kubeadm.go:1154] stopping kube-system containers ...
	I0709 10:02:21.528130    4756 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 10:02:21.590726    4756 command_runner.go:130] > 27bf3f788b98
	I0709 10:02:21.591243    4756 command_runner.go:130] > 126d590bdbfd
	I0709 10:02:21.591243    4756 command_runner.go:130] > 9d2cb3cb7cbc
	I0709 10:02:21.591243    4756 command_runner.go:130] > a9d14b6f518e
	I0709 10:02:21.591286    4756 command_runner.go:130] > a6ab324aafde
	I0709 10:02:21.591286    4756 command_runner.go:130] > 3fcd6a9eb3d6
	I0709 10:02:21.591286    4756 command_runner.go:130] > 95d182c2aab4
	I0709 10:02:21.591286    4756 command_runner.go:130] > 5ceddb3777ed
	I0709 10:02:21.591286    4756 command_runner.go:130] > 4b0f2b8ca089
	I0709 10:02:21.591286    4756 command_runner.go:130] > ff1ad7499190
	I0709 10:02:21.591286    4756 command_runner.go:130] > 8c72b844d05a
	I0709 10:02:21.591351    4756 command_runner.go:130] > 365aedee09e0
	I0709 10:02:21.591403    4756 command_runner.go:130] > 04b318e9d326
	I0709 10:02:21.591403    4756 command_runner.go:130] > 1a41e4afc931
	I0709 10:02:21.591403    4756 command_runner.go:130] > b065282342c2
	I0709 10:02:21.591403    4756 command_runner.go:130] > d513d91e6c5b
	I0709 10:02:21.591403    4756 command_runner.go:130] > bb3b1bf6cdcf
	I0709 10:02:21.591403    4756 command_runner.go:130] > 35823a2c14a0
	I0709 10:02:21.591403    4756 command_runner.go:130] > 28a1e3c298d4
	I0709 10:02:21.591403    4756 command_runner.go:130] > 2e4623c32365
	I0709 10:02:21.591508    4756 command_runner.go:130] > acf01ceea742
	I0709 10:02:21.591599    4756 docker.go:483] Stopping containers: [27bf3f788b98 126d590bdbfd 9d2cb3cb7cbc a9d14b6f518e a6ab324aafde 3fcd6a9eb3d6 95d182c2aab4 5ceddb3777ed 4b0f2b8ca089 ff1ad7499190 8c72b844d05a 365aedee09e0 04b318e9d326 1a41e4afc931 b065282342c2 d513d91e6c5b bb3b1bf6cdcf 35823a2c14a0 28a1e3c298d4 2e4623c32365 acf01ceea742]
	I0709 10:02:21.600482    4756 ssh_runner.go:195] Run: docker stop 27bf3f788b98 126d590bdbfd 9d2cb3cb7cbc a9d14b6f518e a6ab324aafde 3fcd6a9eb3d6 95d182c2aab4 5ceddb3777ed 4b0f2b8ca089 ff1ad7499190 8c72b844d05a 365aedee09e0 04b318e9d326 1a41e4afc931 b065282342c2 d513d91e6c5b bb3b1bf6cdcf 35823a2c14a0 28a1e3c298d4 2e4623c32365 acf01ceea742
	I0709 10:02:22.175176    4756 command_runner.go:130] > 27bf3f788b98
	I0709 10:02:22.176721    4756 command_runner.go:130] > 126d590bdbfd
	I0709 10:02:22.176721    4756 command_runner.go:130] > 9d2cb3cb7cbc
	I0709 10:02:22.176783    4756 command_runner.go:130] > a9d14b6f518e
	I0709 10:02:22.176783    4756 command_runner.go:130] > a6ab324aafde
	I0709 10:02:22.176783    4756 command_runner.go:130] > 3fcd6a9eb3d6
	I0709 10:02:22.176783    4756 command_runner.go:130] > 95d182c2aab4
	I0709 10:02:22.176783    4756 command_runner.go:130] > 5ceddb3777ed
	I0709 10:02:22.176783    4756 command_runner.go:130] > 4b0f2b8ca089
	I0709 10:02:22.176783    4756 command_runner.go:130] > ff1ad7499190
	I0709 10:02:22.176783    4756 command_runner.go:130] > 8c72b844d05a
	I0709 10:02:22.176783    4756 command_runner.go:130] > 365aedee09e0
	I0709 10:02:22.176783    4756 command_runner.go:130] > 04b318e9d326
	I0709 10:02:22.176783    4756 command_runner.go:130] > 1a41e4afc931
	I0709 10:02:22.176783    4756 command_runner.go:130] > b065282342c2
	I0709 10:02:22.176942    4756 command_runner.go:130] > d513d91e6c5b
	I0709 10:02:22.176983    4756 command_runner.go:130] > bb3b1bf6cdcf
	I0709 10:02:22.176983    4756 command_runner.go:130] > 35823a2c14a0
	I0709 10:02:22.176983    4756 command_runner.go:130] > 28a1e3c298d4
	I0709 10:02:22.176983    4756 command_runner.go:130] > 2e4623c32365
	I0709 10:02:22.176983    4756 command_runner.go:130] > acf01ceea742
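The `docker ps`/`docker stop` exchange above shows minikube collecting the kube-system container IDs line by line and stopping them in a single invocation. A minimal sketch of that ID-collection step (the helper name `parseContainerIDs` is an illustration, not minikube's actual function):

```go
package main

import (
	"fmt"
	"strings"
)

// parseContainerIDs splits the newline-separated output of
// `docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}`
// into a slice of container IDs, skipping blank lines.
func parseContainerIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	out := "27bf3f788b98\n126d590bdbfd\n9d2cb3cb7cbc\n"
	ids := parseContainerIDs(out)
	// All IDs are passed to one `docker stop`, as in the log above.
	fmt.Println("docker stop " + strings.Join(ids, " "))
}
```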
	I0709 10:02:22.189130    4756 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0709 10:02:22.261091    4756 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 10:02:22.279229    4756 command_runner.go:130] > -rw------- 1 root root 5647 Jul  9 16:59 /etc/kubernetes/admin.conf
	I0709 10:02:22.279431    4756 command_runner.go:130] > -rw------- 1 root root 5658 Jul  9 16:59 /etc/kubernetes/controller-manager.conf
	I0709 10:02:22.279492    4756 command_runner.go:130] > -rw------- 1 root root 2007 Jul  9 16:59 /etc/kubernetes/kubelet.conf
	I0709 10:02:22.279515    4756 command_runner.go:130] > -rw------- 1 root root 5606 Jul  9 16:59 /etc/kubernetes/scheduler.conf
	I0709 10:02:22.279515    4756 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Jul  9 16:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul  9 16:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul  9 16:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul  9 16:59 /etc/kubernetes/scheduler.conf
	
	I0709 10:02:22.290259    4756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0709 10:02:22.309977    4756 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0709 10:02:22.321286    4756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0709 10:02:22.343465    4756 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0709 10:02:22.362795    4756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0709 10:02:22.379231    4756 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0709 10:02:22.394780    4756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 10:02:22.420338    4756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0709 10:02:22.437377    4756 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0709 10:02:22.450094    4756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
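The grep checks above decide whether each kubeconfig still points at `https://control-plane.minikube.internal:8441`; when the grep exits non-zero, the file is removed so the later `kubeadm init phase kubeconfig` step regenerates it. A simplified sketch of that decision (the helper name `hasExpectedServer` is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// hasExpectedServer reports whether a kubeconfig's contents reference the
// expected control-plane endpoint, approximating the `sudo grep <url> <file>`
// check in the log: exit 0 means keep the file, exit 1 means delete it.
func hasExpectedServer(kubeconfig, endpoint string) bool {
	return strings.Contains(kubeconfig, "server: "+endpoint)
}

func main() {
	conf := "clusters:\n- cluster:\n    server: https://control-plane.minikube.internal:8441\n"
	// prints true: this file would be kept rather than regenerated
	fmt.Println(hasExpectedServer(conf, "https://control-plane.minikube.internal:8441"))
}
```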
	I0709 10:02:22.478832    4756 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 10:02:22.495015    4756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0709 10:02:22.566058    4756 command_runner.go:130] > [certs] Using the existing "sa" key
	I0709 10:02:22.566058    4756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0709 10:02:23.831792    4756 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 10:02:23.834737    4756 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0709 10:02:23.834737    4756 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0709 10:02:23.834829    4756 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0709 10:02:23.834829    4756 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 10:02:23.834829    4756 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 10:02:23.834829    4756 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2687686s)
	I0709 10:02:23.834924    4756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0709 10:02:24.135835    4756 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 10:02:24.137850    4756 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 10:02:24.137893    4756 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0709 10:02:24.137933    4756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0709 10:02:24.217624    4756 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 10:02:24.217624    4756 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 10:02:24.217624    4756 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 10:02:24.217624    4756 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 10:02:24.217624    4756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0709 10:02:24.355971    4756 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 10:02:24.355971    4756 api_server.go:52] waiting for apiserver process to appear ...
	I0709 10:02:24.366207    4756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 10:02:24.871413    4756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 10:02:25.375535    4756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 10:02:25.872538    4756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 10:02:25.917297    4756 command_runner.go:130] > 5682
	I0709 10:02:25.917297    4756 api_server.go:72] duration metric: took 1.5613232s to wait for apiserver process to appear ...
	I0709 10:02:25.917297    4756 api_server.go:88] waiting for apiserver healthz status ...
	I0709 10:02:25.917297    4756 api_server.go:253] Checking apiserver healthz at https://172.18.200.147:8441/healthz ...
	I0709 10:02:28.926894    4756 api_server.go:279] https://172.18.200.147:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0709 10:02:28.927268    4756 api_server.go:103] status: https://172.18.200.147:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0709 10:02:28.927302    4756 api_server.go:253] Checking apiserver healthz at https://172.18.200.147:8441/healthz ...
	I0709 10:02:28.949669    4756 api_server.go:279] https://172.18.200.147:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0709 10:02:28.954402    4756 api_server.go:103] status: https://172.18.200.147:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0709 10:02:29.421590    4756 api_server.go:253] Checking apiserver healthz at https://172.18.200.147:8441/healthz ...
	I0709 10:02:29.431402    4756 api_server.go:279] https://172.18.200.147:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0709 10:02:29.431402    4756 api_server.go:103] status: https://172.18.200.147:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0709 10:02:29.922094    4756 api_server.go:253] Checking apiserver healthz at https://172.18.200.147:8441/healthz ...
	I0709 10:02:29.947045    4756 api_server.go:279] https://172.18.200.147:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0709 10:02:29.947045    4756 api_server.go:103] status: https://172.18.200.147:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0709 10:02:30.420255    4756 api_server.go:253] Checking apiserver healthz at https://172.18.200.147:8441/healthz ...
	I0709 10:02:30.432010    4756 api_server.go:279] https://172.18.200.147:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0709 10:02:30.432078    4756 api_server.go:103] status: https://172.18.200.147:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0709 10:02:30.919270    4756 api_server.go:253] Checking apiserver healthz at https://172.18.200.147:8441/healthz ...
	I0709 10:02:30.932473    4756 api_server.go:279] https://172.18.200.147:8441/healthz returned 200:
	ok
	I0709 10:02:30.933101    4756 round_trippers.go:463] GET https://172.18.200.147:8441/version
	I0709 10:02:30.933101    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:30.933101    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:30.933101    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:30.945722    4756 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0709 10:02:30.945722    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:30.945722    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:30.945815    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:30.945815    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:30.945815    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:30.945815    4756 round_trippers.go:580]     Content-Length: 263
	I0709 10:02:30.945815    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:30 GMT
	I0709 10:02:30.945815    4756 round_trippers.go:580]     Audit-Id: 8be07f14-9c6b-4b7b-9b1f-335d9c3295b2
	I0709 10:02:30.945905    4756 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0709 10:02:30.946027    4756 api_server.go:141] control plane version: v1.30.2
	I0709 10:02:30.946104    4756 api_server.go:131] duration metric: took 5.0287979s to wait for apiserver health ...
	I0709 10:02:30.946104    4756 cni.go:84] Creating CNI manager for ""
	I0709 10:02:30.946188    4756 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 10:02:30.948832    4756 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0709 10:02:30.961561    4756 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0709 10:02:30.983482    4756 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
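The 496-byte file copied to `/etc/cni/net.d/1-k8s.conflist` configures the bridge CNI chosen above. The exact contents are not in the log; a representative bridge conflist of this shape (field values here are assumptions, not the file minikube wrote) looks like:

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```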
	I0709 10:02:31.021625    4756 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 10:02:31.021971    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods
	I0709 10:02:31.022014    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:31.022014    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:31.022014    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:31.022393    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:31.022393    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:31.031690    4756 round_trippers.go:580]     Audit-Id: a29d507e-8df8-4627-a1b5-71fefa58e0ab
	I0709 10:02:31.031690    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:31.031690    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:31.031690    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:31.031690    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:31.031690    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:31 GMT
	I0709 10:02:31.033150    4756 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"578"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"537","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52234 chars]
	I0709 10:02:31.038750    4756 system_pods.go:59] 7 kube-system pods found
	I0709 10:02:31.038750    4756 system_pods.go:61] "coredns-7db6d8ff4d-xdj98" [7c8b499d-245f-49c6-a331-08fb299760f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0709 10:02:31.038750    4756 system_pods.go:61] "etcd-functional-779900" [77e0e55a-96ae-4741-9170-f410d4983d8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0709 10:02:31.038750    4756 system_pods.go:61] "kube-apiserver-functional-779900" [2734e2a5-de09-4b69-8d84-337699102a7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0709 10:02:31.038750    4756 system_pods.go:61] "kube-controller-manager-functional-779900" [8104e4b3-5582-409d-b0f0-6992a4848e48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0709 10:02:31.038750    4756 system_pods.go:61] "kube-proxy-g5gkf" [a62a1c0c-e952-4d3b-b01c-d26a621595e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0709 10:02:31.038750    4756 system_pods.go:61] "kube-scheduler-functional-779900" [181edfe6-dd20-4bbd-a373-f0aca8b60e77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0709 10:02:31.038750    4756 system_pods.go:61] "storage-provisioner" [551b9b07-edb0-4719-a113-2852a2d661b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0709 10:02:31.038750    4756 system_pods.go:74] duration metric: took 17.1247ms to wait for pod list to return data ...
	I0709 10:02:31.038750    4756 node_conditions.go:102] verifying NodePressure condition ...
	I0709 10:02:31.038750    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes
	I0709 10:02:31.038750    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:31.038750    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:31.038750    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:31.041387    4756 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:02:31.041387    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:31.041387    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:31.041387    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:31 GMT
	I0709 10:02:31.041387    4756 round_trippers.go:580]     Audit-Id: 7b2421b5-7538-4119-8d98-8733d8790903
	I0709 10:02:31.041387    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:31.041387    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:31.041387    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:31.041387    4756 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"580"},"items":[{"metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0709 10:02:31.049743    4756 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:02:31.050329    4756 node_conditions.go:123] node cpu capacity is 2
	I0709 10:02:31.050329    4756 node_conditions.go:105] duration metric: took 11.5787ms to run NodePressure ...
	I0709 10:02:31.050329    4756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0709 10:02:31.438631    4756 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0709 10:02:31.438631    4756 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0709 10:02:31.438631    4756 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0709 10:02:31.438631    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0709 10:02:31.438631    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:31.438631    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:31.438631    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:31.443049    4756 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:02:31.443049    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:31.443049    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:31.443049    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:31 GMT
	I0709 10:02:31.443049    4756 round_trippers.go:580]     Audit-Id: 70c908ff-9857-42b1-8ca7-6c416150d35b
	I0709 10:02:31.443049    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:31.443049    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:31.443049    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:31.443907    4756 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"595"},"items":[{"metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31514 chars]
	I0709 10:02:31.445875    4756 kubeadm.go:733] kubelet initialised
	I0709 10:02:31.445875    4756 kubeadm.go:734] duration metric: took 7.2444ms waiting for restarted kubelet to initialise ...
	I0709 10:02:31.445875    4756 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:02:31.446549    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods
	I0709 10:02:31.446549    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:31.446549    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:31.446549    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:31.449215    4756 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:02:31.449215    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:31.449215    4756 round_trippers.go:580]     Audit-Id: fe8377fd-5466-4632-b474-22d35dd737ad
	I0709 10:02:31.449215    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:31.449215    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:31.449215    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:31.449215    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:31.449215    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:31 GMT
	I0709 10:02:31.451516    4756 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"595"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"537","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52234 chars]
	I0709 10:02:31.453467    4756 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xdj98" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:31.453467    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdj98
	I0709 10:02:31.453467    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:31.453467    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:31.453467    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:31.459227    4756 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:02:31.459316    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:31.459316    4756 round_trippers.go:580]     Audit-Id: a68e9315-1a48-4842-bed1-ad361d674e6d
	I0709 10:02:31.459316    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:31.459316    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:31.459376    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:31.459410    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:31.459462    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:31 GMT
	I0709 10:02:31.460157    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"537","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0709 10:02:31.460289    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:31.460289    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:31.460289    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:31.460289    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:31.463845    4756 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:02:31.463845    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:31.463845    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:31.463845    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:31.463845    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:31.463845    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:31.463845    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:31 GMT
	I0709 10:02:31.463845    4756 round_trippers.go:580]     Audit-Id: d30e47d1-b1e4-40e9-b4ab-0a3595fbab11
	I0709 10:02:31.464580    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:31.960708    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdj98
	I0709 10:02:31.960708    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:31.960708    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:31.960708    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:31.963995    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:31.964078    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:31.964127    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:31.964127    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:31.964127    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:31.964127    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:31.964127    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:31 GMT
	I0709 10:02:31.964127    4756 round_trippers.go:580]     Audit-Id: d184da33-24c9-4914-90db-b3d88379936c
	I0709 10:02:31.964127    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"598","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0709 10:02:31.964882    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:31.964882    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:31.965471    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:31.965471    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:31.969039    4756 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:02:31.969520    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:31.969520    4756 round_trippers.go:580]     Audit-Id: 1670fc13-e29d-4ed0-90a0-2225b1f2a25a
	I0709 10:02:31.969520    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:31.969618    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:31.969618    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:31.969618    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:31.969618    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:31 GMT
	I0709 10:02:31.969995    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:32.456219    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdj98
	I0709 10:02:32.456219    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:32.456325    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:32.456325    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:32.460910    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:32.460910    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:32.460987    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:32.460987    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:32.460987    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:32.460987    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:32 GMT
	I0709 10:02:32.460987    4756 round_trippers.go:580]     Audit-Id: f61b8959-4e58-4ca9-af5a-a697bdf220ea
	I0709 10:02:32.460987    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:32.461275    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"598","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0709 10:02:32.462205    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:32.462205    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:32.462276    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:32.462276    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:32.462500    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:32.462500    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:32.465505    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:32.465505    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:32.465505    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:32.465505    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:32.465597    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:32 GMT
	I0709 10:02:32.465597    4756 round_trippers.go:580]     Audit-Id: 3c825006-ad30-46f6-ac5c-92f52117fcc9
	I0709 10:02:32.465914    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:32.956678    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdj98
	I0709 10:02:32.956678    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:32.956678    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:32.956678    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:32.962973    4756 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:02:32.962973    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:32.962973    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:32.962973    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:32.962973    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:32.962973    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:32.962973    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:32 GMT
	I0709 10:02:32.962973    4756 round_trippers.go:580]     Audit-Id: 3a90953d-4422-410a-b99e-38917ec39b18
	I0709 10:02:32.963642    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"598","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0709 10:02:32.964597    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:32.964597    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:32.964597    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:32.964597    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:32.967048    4756 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:02:32.968829    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:32.968829    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:32.968921    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:32 GMT
	I0709 10:02:32.968921    4756 round_trippers.go:580]     Audit-Id: 5691f1ce-d4bc-4344-a115-ae95f8ba3282
	I0709 10:02:32.968921    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:32.968921    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:32.968921    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:32.968921    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:33.467393    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdj98
	I0709 10:02:33.467565    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:33.467565    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:33.467565    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:33.468276    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:33.468276    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:33.468276    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:33.468276    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:33.468276    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:33.468276    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:33 GMT
	I0709 10:02:33.468276    4756 round_trippers.go:580]     Audit-Id: 0fecd1f1-ff62-46bd-a642-55dfa52fac1c
	I0709 10:02:33.472034    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:33.472282    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"598","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0709 10:02:33.473210    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:33.473210    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:33.473210    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:33.473210    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:33.478365    4756 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:02:33.478365    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:33.478365    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:33 GMT
	I0709 10:02:33.478365    4756 round_trippers.go:580]     Audit-Id: 10b2e35b-74eb-4867-8365-30531bb41b31
	I0709 10:02:33.478365    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:33.478365    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:33.478365    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:33.478365    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:33.479030    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:33.479609    4756 pod_ready.go:102] pod "coredns-7db6d8ff4d-xdj98" in "kube-system" namespace has status "Ready":"False"
	I0709 10:02:33.955502    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdj98
	I0709 10:02:33.955565    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:33.955565    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:33.955565    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:33.956043    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:33.956043    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:33.961963    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:33.961963    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:33.961963    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:33 GMT
	I0709 10:02:33.961963    4756 round_trippers.go:580]     Audit-Id: 7df0db73-2ce7-4c2f-af4e-521ffd578408
	I0709 10:02:33.961963    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:33.961963    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:33.962260    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"598","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0709 10:02:33.963131    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:33.963213    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:33.963213    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:33.963213    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:33.963465    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:33.963465    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:33.963465    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:33.963465    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:33 GMT
	I0709 10:02:33.963465    4756 round_trippers.go:580]     Audit-Id: b170495d-579e-4093-91ad-787c9d2c849a
	I0709 10:02:33.963465    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:33.963465    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:33.963465    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:33.966920    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:34.463041    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdj98
	I0709 10:02:34.463268    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:34.463268    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:34.463268    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:34.466933    4756 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:02:34.466933    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:34.466933    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:34 GMT
	I0709 10:02:34.467040    4756 round_trippers.go:580]     Audit-Id: 7b639b87-3bbc-4fb6-9d2c-ea6136a5477b
	I0709 10:02:34.467040    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:34.467040    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:34.467040    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:34.467040    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:34.467258    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"598","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0709 10:02:34.468153    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:34.468215    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:34.468215    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:34.468215    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:34.468519    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:34.468519    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:34.468519    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:34 GMT
	I0709 10:02:34.468519    4756 round_trippers.go:580]     Audit-Id: 4cefe204-a4d6-4a29-a025-320e9fb293b0
	I0709 10:02:34.471175    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:34.471175    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:34.471175    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:34.471175    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:34.471621    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:34.967399    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdj98
	I0709 10:02:34.967492    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:34.967492    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:34.967492    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:34.973350    4756 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:02:34.973350    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:34.973350    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:34.973350    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:34.973350    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:34.973350    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:34.973350    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:34 GMT
	I0709 10:02:34.973350    4756 round_trippers.go:580]     Audit-Id: a2263720-f92c-434a-b896-706536eb94c0
	I0709 10:02:34.975263    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"599","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0709 10:02:34.975301    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:34.975301    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:34.975301    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:34.975301    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:34.977255    4756 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:02:34.977255    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:34.977255    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:34.977255    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:34.977255    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:34.979605    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:34.979605    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:34 GMT
	I0709 10:02:34.979605    4756 round_trippers.go:580]     Audit-Id: 228d2aad-1f00-4153-a140-ef57b7af0773
	I0709 10:02:34.979838    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:34.979838    4756 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdj98" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:34.979838    4756 pod_ready.go:81] duration metric: took 3.5263653s for pod "coredns-7db6d8ff4d-xdj98" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:34.979838    4756 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:34.980373    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:34.980373    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:34.980373    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:34.980525    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:34.980686    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:34.980686    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:34.980686    4756 round_trippers.go:580]     Audit-Id: 2227b691-fca1-41bf-89d7-4d54ba37a7b8
	I0709 10:02:34.982997    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:34.982997    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:34.982997    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:34.982997    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:34.982997    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:34 GMT
	I0709 10:02:34.983153    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:34.983892    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:34.983975    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:34.983975    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:34.983975    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:34.984766    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:34.986891    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:34.986891    4756 round_trippers.go:580]     Audit-Id: 2c3c1e08-78ad-418c-8b7a-e49041aafec3
	I0709 10:02:34.986891    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:34.986891    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:34.986891    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:34.986891    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:34.986891    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:34 GMT
	I0709 10:02:34.987051    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:35.495123    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:35.495123    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:35.495271    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:35.495271    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:35.496785    4756 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:02:35.499104    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:35.499104    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:35.499104    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:35.499104    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:35 GMT
	I0709 10:02:35.499104    4756 round_trippers.go:580]     Audit-Id: 3fd363c6-dd8d-4dc3-b8ac-d98add21c41a
	I0709 10:02:35.499104    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:35.499104    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:35.499525    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:35.500014    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:35.500014    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:35.500014    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:35.500014    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:35.503300    4756 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:02:35.503300    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:35.503300    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:35.503300    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:35.503300    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:35.503300    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:35.503300    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:35 GMT
	I0709 10:02:35.503300    4756 round_trippers.go:580]     Audit-Id: 2bdb918c-3df2-4fdc-a28c-0d23be0b876c
	I0709 10:02:35.503688    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:35.981460    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:35.981591    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:35.981591    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:35.981591    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:35.982457    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:35.986019    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:35.986019    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:35 GMT
	I0709 10:02:35.986019    4756 round_trippers.go:580]     Audit-Id: 036cc531-9990-4c4f-a65a-309b8d49ed65
	I0709 10:02:35.986019    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:35.986019    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:35.986019    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:35.986019    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:35.986238    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:35.986850    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:35.986951    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:35.986951    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:35.986951    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:35.987183    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:35.990208    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:35.990208    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:35.990208    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:35.990208    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:35.990208    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:35.990208    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:35 GMT
	I0709 10:02:35.990208    4756 round_trippers.go:580]     Audit-Id: f20a9fad-0be5-4bf2-a948-c807b6455044
	I0709 10:02:35.990566    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:36.491876    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:36.491876    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:36.491876    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:36.491876    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:36.492417    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:36.496177    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:36.496177    4756 round_trippers.go:580]     Audit-Id: ccd4eca4-047c-400a-b0b5-d9a81932b29d
	I0709 10:02:36.496177    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:36.496177    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:36.496177    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:36.496177    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:36.496177    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:36 GMT
	I0709 10:02:36.496435    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:36.496968    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:36.497124    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:36.497124    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:36.497124    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:36.500379    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:36.500379    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:36.500379    4756 round_trippers.go:580]     Audit-Id: 894da4b0-89f7-4526-8b36-8f7a83b81d39
	I0709 10:02:36.500379    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:36.500379    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:36.500379    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:36.500379    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:36.500379    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:36 GMT
	I0709 10:02:36.500379    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:36.994647    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:36.994647    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:36.994647    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:36.994647    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:36.995182    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:36.999190    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:36.999190    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:36.999190    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:36.999190    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:36.999272    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:37 GMT
	I0709 10:02:36.999272    4756 round_trippers.go:580]     Audit-Id: 7b694803-c735-4477-801b-ab628fd61af9
	I0709 10:02:36.999272    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:36.999691    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:37.000438    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:37.000438    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:37.000527    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:37.000527    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:37.000760    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:37.000760    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:37.003686    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:37.003686    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:37.003686    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:37.003686    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:37 GMT
	I0709 10:02:37.003686    4756 round_trippers.go:580]     Audit-Id: 23c4d176-0d6a-439d-87c5-03a558fd5ec1
	I0709 10:02:37.003771    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:37.003922    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:37.004671    4756 pod_ready.go:102] pod "etcd-functional-779900" in "kube-system" namespace has status "Ready":"False"
	I0709 10:02:37.481853    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:37.481853    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:37.482003    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:37.482003    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:37.482365    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:37.487158    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:37.487158    4756 round_trippers.go:580]     Audit-Id: c3d1bd77-e9a7-498d-a4ca-670fdae1299f
	I0709 10:02:37.487158    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:37.487158    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:37.487158    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:37.487158    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:37.487158    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:37 GMT
	I0709 10:02:37.487315    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:37.488248    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:37.488318    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:37.488318    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:37.488318    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:37.488610    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:37.488610    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:37.491694    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:37 GMT
	I0709 10:02:37.491694    4756 round_trippers.go:580]     Audit-Id: 7564184d-bece-4ad3-8711-76317e479cf3
	I0709 10:02:37.491694    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:37.491694    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:37.491694    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:37.491694    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:37.492249    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:37.988201    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:37.988201    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:37.988201    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:37.988201    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:37.988765    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:37.993002    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:37.993002    4756 round_trippers.go:580]     Audit-Id: 5f54ae93-f40c-4f89-80cc-901adca6fcd6
	I0709 10:02:37.993002    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:37.993002    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:37.993002    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:37.993002    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:37.993002    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:37 GMT
	I0709 10:02:37.993002    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:37.994006    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:37.994006    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:37.994095    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:37.994095    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:37.994310    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:37.994310    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:37.994310    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:38 GMT
	I0709 10:02:37.994310    4756 round_trippers.go:580]     Audit-Id: bf8c2826-fd53-46a8-b5b3-d349fab9f572
	I0709 10:02:37.994310    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:37.994310    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:37.994310    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:37.994310    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:37.997479    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:38.492285    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:38.492489    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:38.492489    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:38.492489    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:38.492867    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:38.496135    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:38.496135    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:38.496135    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:38.496135    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:38.496135    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:38 GMT
	I0709 10:02:38.496241    4756 round_trippers.go:580]     Audit-Id: 13d9c982-fbb5-4fd3-9f39-cf24c2e131e0
	I0709 10:02:38.496241    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:38.496401    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:38.497002    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:38.497002    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:38.497002    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:38.497002    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:38.497362    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:38.497362    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:38.497362    4756 round_trippers.go:580]     Audit-Id: 14ccad3d-768f-4d5b-aa09-2cf4e684f34b
	I0709 10:02:38.500557    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:38.500557    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:38.500557    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:38.500557    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:38.500557    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:38 GMT
	I0709 10:02:38.500788    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:38.993986    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:38.993986    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:38.993986    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:38.993986    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:38.994521    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:38.994521    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:38.994521    4756 round_trippers.go:580]     Audit-Id: 91159187-c24f-4a2b-a368-75c8e147a814
	I0709 10:02:38.994521    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:38.998091    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:38.998091    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:38.998091    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:38.998091    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:39 GMT
	I0709 10:02:38.998252    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:38.999054    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:38.999137    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:38.999137    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:38.999137    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:39.003617    4756 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:02:39.003720    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:39.003720    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:39 GMT
	I0709 10:02:39.003801    4756 round_trippers.go:580]     Audit-Id: 5245bc75-3d80-4ed7-a2a7-5a1fba0db657
	I0709 10:02:39.003833    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:39.003833    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:39.003833    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:39.003833    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:39.004709    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:39.005605    4756 pod_ready.go:102] pod "etcd-functional-779900" in "kube-system" namespace has status "Ready":"False"
	I0709 10:02:39.488074    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:39.488188    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:39.488188    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:39.488188    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:39.488450    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:39.488450    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:39.488450    4756 round_trippers.go:580]     Audit-Id: 11c941c1-c134-4fd1-89bd-fb91f4b4c136
	I0709 10:02:39.492067    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:39.492067    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:39.492067    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:39.492067    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:39.492067    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:39 GMT
	I0709 10:02:39.492203    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:39.493004    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:39.493073    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:39.493073    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:39.493073    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:39.493255    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:39.493255    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:39.493255    4756 round_trippers.go:580]     Audit-Id: bf967b36-cd78-4c43-bee0-3ecfcaefafc6
	I0709 10:02:39.493255    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:39.493255    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:39.493255    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:39.493255    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:39.493255    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:39 GMT
	I0709 10:02:39.496743    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:39.995720    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:39.995813    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:39.995813    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:39.995813    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:39.996059    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:40.000516    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:40.000516    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:40.000516    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:40.000689    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:40.000689    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:40 GMT
	I0709 10:02:40.000689    4756 round_trippers.go:580]     Audit-Id: bfdaada8-40cf-4e1a-be19-dff203353e54
	I0709 10:02:40.000689    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:40.000909    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:40.001699    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:40.001778    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:40.001778    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:40.001778    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:40.005078    4756 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:02:40.005734    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:40.005734    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:40.005767    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:40 GMT
	I0709 10:02:40.005767    4756 round_trippers.go:580]     Audit-Id: fbf5b41b-46e0-4320-8309-58c438f105b2
	I0709 10:02:40.005767    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:40.005767    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:40.005767    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:40.005767    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:40.488482    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:40.488482    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:40.488482    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:40.488482    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:40.496786    4756 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 10:02:40.497318    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:40.497318    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:40.497318    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:40 GMT
	I0709 10:02:40.497318    4756 round_trippers.go:580]     Audit-Id: 87c63388-e517-49ad-81b2-7cd9ade57378
	I0709 10:02:40.497318    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:40.497318    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:40.497318    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:40.497448    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:40.498358    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:40.498358    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:40.498358    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:40.498358    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:40.500596    4756 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:02:40.500596    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:40.500596    4756 round_trippers.go:580]     Audit-Id: cdfa7f51-7a05-40b7-8a1d-a474f934229b
	I0709 10:02:40.500596    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:40.500596    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:40.500596    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:40.501775    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:40.501775    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:40 GMT
	I0709 10:02:40.501926    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:40.991656    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:40.991656    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:40.991656    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:40.991780    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:40.992070    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:40.992070    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:40.992070    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:40.992070    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:40.992070    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:41 GMT
	I0709 10:02:40.992070    4756 round_trippers.go:580]     Audit-Id: 191836e5-147a-4db2-ab14-028d9875bf89
	I0709 10:02:40.992070    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:40.992070    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:40.996254    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:40.996895    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:40.996895    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:40.996895    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:40.996895    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:40.997887    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:40.997887    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:40.997887    4756 round_trippers.go:580]     Audit-Id: c8fae36c-95ee-4ff9-98b3-42612f908279
	I0709 10:02:40.997887    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:40.997887    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:40.997887    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:40.997887    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:40.997887    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:41 GMT
	I0709 10:02:40.997887    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:41.490369    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:41.490468    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:41.490468    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:41.490468    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:41.490775    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:41.494633    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:41.494633    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:41.494633    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:41.494633    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:41.494633    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:41.494716    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:41 GMT
	I0709 10:02:41.494716    4756 round_trippers.go:580]     Audit-Id: 95f8c8c4-37cd-4192-b17c-0ee0aaef81a0
	I0709 10:02:41.494861    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:41.496446    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:41.496446    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:41.496446    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:41.496528    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:41.497196    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:41.499744    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:41.499744    4756 round_trippers.go:580]     Audit-Id: 156606b5-f53a-4620-9208-c9143ec616d4
	I0709 10:02:41.499744    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:41.499744    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:41.499870    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:41.499870    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:41.499870    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:41 GMT
	I0709 10:02:41.500101    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:41.500720    4756 pod_ready.go:102] pod "etcd-functional-779900" in "kube-system" namespace has status "Ready":"False"
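The `pod_ready.go` line above is the decision point of this loop: minikube fetches the `etcd-functional-779900` Pod roughly every 500ms and retries until the Pod's `Ready` condition reports `"True"`. A minimal sketch of that condition check, using stand-in structs instead of the real `k8s.io/api/core/v1` types (the field shapes here are assumptions chosen to mirror the JSON in the responses above):

```go
package main

import "fmt"

// Minimal stand-ins for the corev1 types that the readiness check inspects;
// minikube's real code uses k8s.io/api/core/v1 via client-go.
type PodCondition struct {
	Type   string // e.g. "Ready"
	Status string // "True", "False", or "Unknown"
}

type PodStatus struct {
	Phase      string
	Conditions []PodCondition
}

// isPodReady sketches the check behind the log line
// `pod "etcd-functional-779900" in "kube-system" namespace has status "Ready":"False"`:
// a pod counts as ready only when its Ready condition is "True".
func isPodReady(status PodStatus) bool {
	for _, c := range status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	etcd := PodStatus{
		Phase: "Running",
		Conditions: []PodCondition{
			{Type: "PodScheduled", Status: "True"},
			{Type: "Ready", Status: "False"}, // what the log above reports
		},
	}
	// While this stays false, the poll loop keeps re-fetching the Pod.
	fmt.Println(isPodReady(etcd))
}
```

Each iteration in the log pairs the Pod GET with a Node GET because the waiter also confirms the node itself is still registered before retrying.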
	I0709 10:02:41.988676    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:41.988676    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:41.988676    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:41.988676    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:42.005111    4756 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0709 10:02:42.012976    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:42.013012    4756 round_trippers.go:580]     Audit-Id: 52fb4dea-3447-475e-92fe-e294a7359d57
	I0709 10:02:42.013051    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:42.013087    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:42.013087    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:42.013087    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:42.013134    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:42 GMT
	I0709 10:02:42.013377    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:42.013670    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:42.014259    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:42.014259    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:42.014259    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:42.014582    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:42.014582    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:42.014582    4756 round_trippers.go:580]     Audit-Id: 3cf12627-f69f-4cd1-82f2-4d1a677b3726
	I0709 10:02:42.020166    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:42.020166    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:42.020166    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:42.020166    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:42.020214    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:42 GMT
	I0709 10:02:42.020399    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:42.491572    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:42.491743    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:42.491743    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:42.491743    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:42.495746    4756 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:02:42.495746    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:42.495746    4756 round_trippers.go:580]     Audit-Id: e247ab92-07d4-4e3c-a649-7c758ead94cd
	I0709 10:02:42.495746    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:42.495746    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:42.495746    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:42.495746    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:42.495746    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:42 GMT
	I0709 10:02:42.495746    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:42.496569    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:42.496569    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:42.496569    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:42.496569    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:42.498951    4756 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:02:42.498951    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:42.498951    4756 round_trippers.go:580]     Audit-Id: 412db39d-aa07-4315-a2ae-20794efd6865
	I0709 10:02:42.498951    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:42.498951    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:42.498951    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:42.499620    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:42.499620    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:42 GMT
	I0709 10:02:42.499693    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:42.983823    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:42.983823    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:42.983823    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:42.983823    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:42.987393    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:42.987393    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:42.987393    4756 round_trippers.go:580]     Audit-Id: 93768044-5b4a-4e11-966b-e03370a27dda
	I0709 10:02:42.987393    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:42.987393    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:42.987483    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:42.987483    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:42.987483    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:42 GMT
	I0709 10:02:42.987689    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:42.988067    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:42.988067    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:42.988067    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:42.988067    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:42.988749    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:42.988749    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:42.988749    4756 round_trippers.go:580]     Audit-Id: ca6af7a1-a58c-44a7-a674-aa62a71c86c3
	I0709 10:02:42.988749    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:42.988749    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:42.988749    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:42.992383    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:42.992383    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:42 GMT
	I0709 10:02:42.992860    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:43.490733    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:43.490811    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:43.490953    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:43.490953    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:43.497960    4756 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:02:43.497960    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:43.497960    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:43.497960    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:43.497960    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:43 GMT
	I0709 10:02:43.497960    4756 round_trippers.go:580]     Audit-Id: c91059ce-9ab8-4093-a001-691d77319576
	I0709 10:02:43.497960    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:43.497960    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:43.498564    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"533","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6744 chars]
	I0709 10:02:43.499592    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:43.499592    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:43.499669    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:43.499669    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:43.501061    4756 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:02:43.501061    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:43.501061    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:43.501061    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:43.501061    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:43.501061    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:43 GMT
	I0709 10:02:43.501061    4756 round_trippers.go:580]     Audit-Id: 48f10799-3840-4183-803a-7365a314a50f
	I0709 10:02:43.501061    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:43.501061    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:43.504106    4756 pod_ready.go:102] pod "etcd-functional-779900" in "kube-system" namespace has status "Ready":"False"
	I0709 10:02:43.986891    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:43.987092    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:43.987092    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:43.987092    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:43.987344    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:43.987344    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:43.987344    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:43.987344    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:43.987344    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:43.987344    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:43.987344    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:43 GMT
	I0709 10:02:43.987344    4756 round_trippers.go:580]     Audit-Id: 9b15e50d-3617-4c2e-b9f2-f61dcbe16852
	I0709 10:02:43.990954    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"610","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6520 chars]
	I0709 10:02:43.991671    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:43.991671    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:43.991763    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:43.991763    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:43.998304    4756 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:02:43.998304    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:43.998304    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:43.998304    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:43.998304    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:43.998304    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:43.998304    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:43.998304    4756 round_trippers.go:580]     Audit-Id: 7d39efbf-f20c-4fe7-85eb-fcd707170ce7
	I0709 10:02:43.998304    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:43.999462    4756 pod_ready.go:92] pod "etcd-functional-779900" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:43.999462    4756 pod_ready.go:81] duration metric: took 9.0196078s for pod "etcd-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:43.999625    4756 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:43.999863    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-779900
	I0709 10:02:43.999863    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:43.999910    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:43.999910    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.002466    4756 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:02:44.003462    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.003462    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.003462    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.003462    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.003462    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.003505    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.003505    4756 round_trippers.go:580]     Audit-Id: 33ba0171-4fae-4f58-91a7-64eaa08111b2
	I0709 10:02:44.003505    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-779900","namespace":"kube-system","uid":"2734e2a5-de09-4b69-8d84-337699102a7c","resourceVersion":"531","creationTimestamp":"2024-07-09T16:59:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.200.147:8441","kubernetes.io/config.hash":"934c6388817366f3058a75c13d7f3c1a","kubernetes.io/config.mirror":"934c6388817366f3058a75c13d7f3c1a","kubernetes.io/config.seen":"2024-07-09T16:59:52.779666345Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T16:59:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8296 chars]
	I0709 10:02:44.004487    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:44.004599    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.004599    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.004599    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.006017    4756 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:02:44.006017    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.006017    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.006017    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.006017    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.007610    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.007610    4756 round_trippers.go:580]     Audit-Id: 7f6ea956-a143-468e-80ad-2de054e25056
	I0709 10:02:44.007610    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.007792    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:44.500558    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-779900
	I0709 10:02:44.500597    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.500659    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.500712    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.503157    4756 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:02:44.503662    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.503662    4756 round_trippers.go:580]     Audit-Id: 596640ba-f9b1-4102-b73e-6f4ade66232f
	I0709 10:02:44.503662    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.503662    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.503662    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.503662    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.503754    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.503939    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-779900","namespace":"kube-system","uid":"2734e2a5-de09-4b69-8d84-337699102a7c","resourceVersion":"612","creationTimestamp":"2024-07-09T16:59:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.200.147:8441","kubernetes.io/config.hash":"934c6388817366f3058a75c13d7f3c1a","kubernetes.io/config.mirror":"934c6388817366f3058a75c13d7f3c1a","kubernetes.io/config.seen":"2024-07-09T16:59:52.779666345Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T16:59:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8052 chars]
	I0709 10:02:44.504582    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:44.504582    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.504582    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.504582    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.508395    4756 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:02:44.508395    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.508395    4756 round_trippers.go:580]     Audit-Id: 3cf640d4-3560-41fd-93c6-14196b1ae6af
	I0709 10:02:44.508498    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.508498    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.508498    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.508498    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.508498    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.508557    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:44.508557    4756 pod_ready.go:92] pod "kube-apiserver-functional-779900" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:44.509100    4756 pod_ready.go:81] duration metric: took 509.3428ms for pod "kube-apiserver-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:44.509100    4756 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:44.509202    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-779900
	I0709 10:02:44.509202    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.509202    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.509202    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.509955    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:44.509955    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.512242    4756 round_trippers.go:580]     Audit-Id: 25681b6e-392e-497e-9f0d-5b2d82bb6c73
	I0709 10:02:44.512242    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.512242    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.512242    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.512242    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.512371    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.512711    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-779900","namespace":"kube-system","uid":"8104e4b3-5582-409d-b0f0-6992a4848e48","resourceVersion":"600","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d70c8c9df410abfe9fc20ad7c5213d0","kubernetes.io/config.mirror":"4d70c8c9df410abfe9fc20ad7c5213d0","kubernetes.io/config.seen":"2024-07-09T17:00:00.082755789Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0709 10:02:44.513263    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:44.513263    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.513339    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.513339    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.515573    4756 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:02:44.515930    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.515930    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.515969    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.516002    4756 round_trippers.go:580]     Audit-Id: 1192b967-fcbf-4cc6-9c7c-eef27640123e
	I0709 10:02:44.516002    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.516002    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.516002    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.516002    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:44.516002    4756 pod_ready.go:92] pod "kube-controller-manager-functional-779900" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:44.516533    4756 pod_ready.go:81] duration metric: took 6.9015ms for pod "kube-controller-manager-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:44.516533    4756 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g5gkf" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:44.517010    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-proxy-g5gkf
	I0709 10:02:44.517010    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.517010    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.517010    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.517646    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:44.520590    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.520650    4756 round_trippers.go:580]     Audit-Id: f5b7022b-b5b7-48ce-9f62-4c2b3f8ae443
	I0709 10:02:44.520650    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.520650    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.520720    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.520720    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.520720    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.520848    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g5gkf","generateName":"kube-proxy-","namespace":"kube-system","uid":"a62a1c0c-e952-4d3b-b01c-d26a621595e3","resourceVersion":"596","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2802b511-3573-49a5-83f8-1e4f1886a5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2802b511-3573-49a5-83f8-1e4f1886a5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6040 chars]
	I0709 10:02:44.521554    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:44.521591    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.521630    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.521630    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.523029    4756 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:02:44.524859    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.524943    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.524943    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.524943    4756 round_trippers.go:580]     Audit-Id: b5dfb914-2d4d-42fd-b404-4b0bc2eda939
	I0709 10:02:44.524943    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.524943    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.524943    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.525345    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:44.525838    4756 pod_ready.go:92] pod "kube-proxy-g5gkf" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:44.525876    4756 pod_ready.go:81] duration metric: took 9.2799ms for pod "kube-proxy-g5gkf" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:44.525876    4756 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:44.525985    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-779900
	I0709 10:02:44.525985    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.525985    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.525985    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.526643    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:44.526643    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.526643    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.526643    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.526643    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.526643    4756 round_trippers.go:580]     Audit-Id: 28243844-8b93-4ba7-bfd9-5d9a099bf4d3
	I0709 10:02:44.526643    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.526643    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.526643    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-779900","namespace":"kube-system","uid":"181edfe6-dd20-4bbd-a373-f0aca8b60e77","resourceVersion":"603","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eecb2139fb325a7366540d856cf8fe95","kubernetes.io/config.mirror":"eecb2139fb325a7366540d856cf8fe95","kubernetes.io/config.seen":"2024-07-09T17:00:00.082756689Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5459 chars]
	I0709 10:02:44.529339    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:44.529396    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.529396    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.529396    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.541913    4756 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0709 10:02:44.544190    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.544190    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.544190    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.544262    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.544262    4756 round_trippers.go:580]     Audit-Id: f42d6966-d78e-4bb2-9e5f-e156d3630c09
	I0709 10:02:44.544262    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.544262    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.544262    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:44.544806    4756 pod_ready.go:92] pod "kube-scheduler-functional-779900" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:44.544806    4756 pod_ready.go:81] duration metric: took 18.9299ms for pod "kube-scheduler-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:44.544806    4756 pod_ready.go:38] duration metric: took 13.0989075s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:02:44.544950    4756 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 10:02:44.563006    4756 command_runner.go:130] > -16
	I0709 10:02:44.563006    4756 ops.go:34] apiserver oom_adj: -16
	I0709 10:02:44.563095    4756 kubeadm.go:591] duration metric: took 23.143179s to restartPrimaryControlPlane
	I0709 10:02:44.563095    4756 kubeadm.go:393] duration metric: took 23.2448221s to StartCluster
	I0709 10:02:44.563160    4756 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:02:44.563305    4756 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:02:44.564622    4756 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:02:44.566386    4756 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.200.147 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:02:44.566386    4756 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0709 10:02:44.566922    4756 addons.go:69] Setting storage-provisioner=true in profile "functional-779900"
	I0709 10:02:44.566963    4756 addons.go:69] Setting default-storageclass=true in profile "functional-779900"
	I0709 10:02:44.566963    4756 addons.go:234] Setting addon storage-provisioner=true in "functional-779900"
	W0709 10:02:44.567226    4756 addons.go:243] addon storage-provisioner should already be in state true
	I0709 10:02:44.567350    4756 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:02:44.567030    4756 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-779900"
	I0709 10:02:44.567420    4756 host.go:66] Checking if "functional-779900" exists ...
	I0709 10:02:44.570529    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:02:44.570736    4756 out.go:177] * Verifying Kubernetes components...
	I0709 10:02:44.571068    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:02:44.592010    4756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:02:44.864522    4756 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:02:44.890837    4756 node_ready.go:35] waiting up to 6m0s for node "functional-779900" to be "Ready" ...
	I0709 10:02:44.891149    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:44.891216    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.891216    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.891331    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.898781    4756 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:02:44.898781    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.898781    4756 round_trippers.go:580]     Audit-Id: 2aa898dd-e0a2-436f-81e5-afd2a7508bba
	I0709 10:02:44.898781    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.898781    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.898781    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.898781    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.898781    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.899570    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:44.899613    4756 node_ready.go:49] node "functional-779900" has status "Ready":"True"
	I0709 10:02:44.899613    4756 node_ready.go:38] duration metric: took 8.7762ms for node "functional-779900" to be "Ready" ...
	I0709 10:02:44.899613    4756 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:02:44.900196    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods
	I0709 10:02:44.900196    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.900196    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.900196    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.901982    4756 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:02:44.901982    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.901982    4756 round_trippers.go:580]     Audit-Id: 6bf2b830-97dd-4230-b585-05afdffe874c
	I0709 10:02:44.901982    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.901982    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.901982    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.901982    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.901982    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:44 GMT
	I0709 10:02:44.905325    4756 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"612"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"599","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50807 chars]
	I0709 10:02:44.907819    4756 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xdj98" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:44.989036    4756 request.go:629] Waited for 80.8434ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdj98
	I0709 10:02:44.989164    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdj98
	I0709 10:02:44.989164    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:44.989270    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:44.989270    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:44.997903    4756 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 10:02:44.997903    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:44.997903    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:44.997903    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:45 GMT
	I0709 10:02:44.997903    4756 round_trippers.go:580]     Audit-Id: f65a38ad-a999-4fbf-a433-8b3d14309a60
	I0709 10:02:44.997903    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:44.997903    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:44.997903    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:44.998685    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"599","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0709 10:02:45.197702    4756 request.go:629] Waited for 197.6766ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:45.197907    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:45.197907    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:45.198016    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:45.198016    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:45.198428    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:45.202439    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:45.202439    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:45.202513    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:45.202513    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:45 GMT
	I0709 10:02:45.202513    4756 round_trippers.go:580]     Audit-Id: 0cd7e505-2536-438f-90ca-a1ab19013daa
	I0709 10:02:45.202582    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:45.202582    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:45.202750    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:45.203392    4756 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdj98" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:45.203625    4756 pod_ready.go:81] duration metric: took 295.6557ms for pod "coredns-7db6d8ff4d-xdj98" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:45.203662    4756 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:45.391544    4756 request.go:629] Waited for 187.5953ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:45.391648    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/etcd-functional-779900
	I0709 10:02:45.391648    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:45.391648    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:45.391648    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:45.401330    4756 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0709 10:02:45.401735    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:45.401735    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:45.401735    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:45.401735    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:45.401735    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:45 GMT
	I0709 10:02:45.401735    4756 round_trippers.go:580]     Audit-Id: 104268fe-b825-47e7-b8fd-9b61b27abf91
	I0709 10:02:45.401735    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:45.402047    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-779900","namespace":"kube-system","uid":"77e0e55a-96ae-4741-9170-f410d4983d8f","resourceVersion":"610","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.200.147:2379","kubernetes.io/config.hash":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.mirror":"0b76b90bc7a928d70e36053eeb3e34b6","kubernetes.io/config.seen":"2024-07-09T17:00:00.082751189Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6520 chars]
	I0709 10:02:45.590852    4756 request.go:629] Waited for 188.2684ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:45.591104    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:45.591104    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:45.591104    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:45.591229    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:45.597754    4756 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:02:45.597754    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:45.597754    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:45.597754    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:45 GMT
	I0709 10:02:45.597754    4756 round_trippers.go:580]     Audit-Id: 6435bcd8-2398-4972-97db-f4222eb04a3b
	I0709 10:02:45.597754    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:45.597754    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:45.597754    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:45.598346    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:45.598562    4756 pod_ready.go:92] pod "etcd-functional-779900" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:45.598562    4756 pod_ready.go:81] duration metric: took 394.8994ms for pod "etcd-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:45.598562    4756 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:45.802748    4756 request.go:629] Waited for 204.0469ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-779900
	I0709 10:02:45.802950    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-779900
	I0709 10:02:45.802950    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:45.802950    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:45.802950    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:45.803283    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:45.803283    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:45.803283    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:45.807378    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:45.807378    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:45 GMT
	I0709 10:02:45.807378    4756 round_trippers.go:580]     Audit-Id: 5638d879-242d-470a-a5d6-fcdfe5dcc040
	I0709 10:02:45.807378    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:45.807378    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:45.807842    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-779900","namespace":"kube-system","uid":"2734e2a5-de09-4b69-8d84-337699102a7c","resourceVersion":"612","creationTimestamp":"2024-07-09T16:59:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.200.147:8441","kubernetes.io/config.hash":"934c6388817366f3058a75c13d7f3c1a","kubernetes.io/config.mirror":"934c6388817366f3058a75c13d7f3c1a","kubernetes.io/config.seen":"2024-07-09T16:59:52.779666345Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T16:59:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8052 chars]
	I0709 10:02:45.994336    4756 request.go:629] Waited for 186.0047ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:45.994336    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:45.994461    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:45.994461    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:45.994511    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:45.995218    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:45.998659    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:45.998659    4756 round_trippers.go:580]     Audit-Id: 386f294d-cadf-45e8-8eeb-94ccc23fa769
	I0709 10:02:45.998659    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:45.998659    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:45.998659    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:45.998659    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:45.998659    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:46 GMT
	I0709 10:02:45.998821    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:45.999384    4756 pod_ready.go:92] pod "kube-apiserver-functional-779900" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:45.999384    4756 pod_ready.go:81] duration metric: took 400.8208ms for pod "kube-apiserver-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:45.999384    4756 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:46.202631    4756 request.go:629] Waited for 202.9167ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-779900
	I0709 10:02:46.202631    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-779900
	I0709 10:02:46.202631    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:46.202631    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:46.202631    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:46.203071    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:46.207108    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:46.207108    4756 round_trippers.go:580]     Audit-Id: 2d394473-6949-4905-8d80-3ac55281ed72
	I0709 10:02:46.207108    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:46.207108    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:46.207108    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:46.207108    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:46.207108    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:46 GMT
	I0709 10:02:46.207572    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-779900","namespace":"kube-system","uid":"8104e4b3-5582-409d-b0f0-6992a4848e48","resourceVersion":"600","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d70c8c9df410abfe9fc20ad7c5213d0","kubernetes.io/config.mirror":"4d70c8c9df410abfe9fc20ad7c5213d0","kubernetes.io/config.seen":"2024-07-09T17:00:00.082755789Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0709 10:02:46.395681    4756 request.go:629] Waited for 187.2487ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:46.395972    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:46.396029    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:46.396029    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:46.396029    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:46.396316    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:46.401028    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:46.401028    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:46.401028    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:46 GMT
	I0709 10:02:46.401028    4756 round_trippers.go:580]     Audit-Id: 5feae2ea-635a-4137-9516-f4b328d6f918
	I0709 10:02:46.401028    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:46.401028    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:46.401028    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:46.401266    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:46.401884    4756 pod_ready.go:92] pod "kube-controller-manager-functional-779900" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:46.401884    4756 pod_ready.go:81] duration metric: took 402.4995ms for pod "kube-controller-manager-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:46.401884    4756 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g5gkf" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:46.589228    4756 request.go:629] Waited for 187.2393ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-proxy-g5gkf
	I0709 10:02:46.589228    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-proxy-g5gkf
	I0709 10:02:46.589228    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:46.589228    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:46.589228    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:46.589918    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:46.589918    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:46.589918    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:46.594055    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:46.594055    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:46 GMT
	I0709 10:02:46.594055    4756 round_trippers.go:580]     Audit-Id: c796c1fe-18bf-421e-a123-40e187819774
	I0709 10:02:46.594055    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:46.594055    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:46.594408    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g5gkf","generateName":"kube-proxy-","namespace":"kube-system","uid":"a62a1c0c-e952-4d3b-b01c-d26a621595e3","resourceVersion":"596","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2802b511-3573-49a5-83f8-1e4f1886a5c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2802b511-3573-49a5-83f8-1e4f1886a5c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6040 chars]
	I0709 10:02:46.740283    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:02:46.740283    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:02:46.752604    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:02:46.752660    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:02:46.753465    4756 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:02:46.754311    4756 kapi.go:59] client config for functional-779900: &rest.Config{Host:"https://172.18.200.147:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-779900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-779900\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 10:02:46.755411    4756 addons.go:234] Setting addon default-storageclass=true in "functional-779900"
	W0709 10:02:46.755411    4756 addons.go:243] addon default-storageclass should already be in state true
	I0709 10:02:46.755582    4756 host.go:66] Checking if "functional-779900" exists ...
	I0709 10:02:46.755730    4756 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 10:02:46.757118    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:02:46.758850    4756 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 10:02:46.758850    4756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 10:02:46.759056    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:02:46.788229    4756 request.go:629] Waited for 193.1825ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:46.788438    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:46.788438    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:46.788438    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:46.788570    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:46.788824    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:46.793129    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:46.793129    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:46.793129    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:46.793291    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:46 GMT
	I0709 10:02:46.793291    4756 round_trippers.go:580]     Audit-Id: 89984c9a-9f2a-4bb2-b3ad-d7f49b22cec9
	I0709 10:02:46.793315    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:46.793315    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:46.793638    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:46.794182    4756 pod_ready.go:92] pod "kube-proxy-g5gkf" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:46.794182    4756 pod_ready.go:81] duration metric: took 392.2974ms for pod "kube-proxy-g5gkf" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:46.794182    4756 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:46.992805    4756 request.go:629] Waited for 198.313ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-779900
	I0709 10:02:46.992916    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-779900
	I0709 10:02:46.992916    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:46.992916    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:46.992987    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:46.993802    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:46.993802    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:46.993802    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:47 GMT
	I0709 10:02:46.997768    4756 round_trippers.go:580]     Audit-Id: e1df8a45-9830-4140-918f-f39e3088e173
	I0709 10:02:46.997768    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:46.997768    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:46.997768    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:46.997768    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:46.998088    4756 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-779900","namespace":"kube-system","uid":"181edfe6-dd20-4bbd-a373-f0aca8b60e77","resourceVersion":"603","creationTimestamp":"2024-07-09T17:00:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eecb2139fb325a7366540d856cf8fe95","kubernetes.io/config.mirror":"eecb2139fb325a7366540d856cf8fe95","kubernetes.io/config.seen":"2024-07-09T17:00:00.082756689Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5459 chars]
	I0709 10:02:47.189109    4756 request.go:629] Waited for 190.3037ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:47.189277    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes/functional-779900
	I0709 10:02:47.189277    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:47.189277    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:47.189341    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:47.193788    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:47.193788    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:47.193788    4756 round_trippers.go:580]     Audit-Id: b38ab883-4d1f-432d-a3ce-d6725cbc1fa0
	I0709 10:02:47.193883    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:47.193943    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:47.193943    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:47.193943    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:47.194014    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:47 GMT
	I0709 10:02:47.194392    4756 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-09T16:59:56Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0709 10:02:47.194857    4756 pod_ready.go:92] pod "kube-scheduler-functional-779900" in "kube-system" namespace has status "Ready":"True"
	I0709 10:02:47.194857    4756 pod_ready.go:81] duration metric: took 400.6745ms for pod "kube-scheduler-functional-779900" in "kube-system" namespace to be "Ready" ...
	I0709 10:02:47.194857    4756 pod_ready.go:38] duration metric: took 2.2952398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:02:47.194857    4756 api_server.go:52] waiting for apiserver process to appear ...
	I0709 10:02:47.222480    4756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 10:02:47.254355    4756 command_runner.go:130] > 5682
	I0709 10:02:47.254677    4756 api_server.go:72] duration metric: took 2.6882306s to wait for apiserver process to appear ...
	I0709 10:02:47.254677    4756 api_server.go:88] waiting for apiserver healthz status ...
	I0709 10:02:47.254677    4756 api_server.go:253] Checking apiserver healthz at https://172.18.200.147:8441/healthz ...
	I0709 10:02:47.261760    4756 api_server.go:279] https://172.18.200.147:8441/healthz returned 200:
	ok
	I0709 10:02:47.263520    4756 round_trippers.go:463] GET https://172.18.200.147:8441/version
	I0709 10:02:47.263520    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:47.263617    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:47.263617    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:47.266803    4756 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:02:47.266803    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:47.266803    4756 round_trippers.go:580]     Audit-Id: 30509ab8-518e-43b4-ae23-bd70211f0821
	I0709 10:02:47.266803    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:47.266803    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:47.266896    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:47.266896    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:47.266896    4756 round_trippers.go:580]     Content-Length: 263
	I0709 10:02:47.266896    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:47 GMT
	I0709 10:02:47.266896    4756 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0709 10:02:47.267018    4756 api_server.go:141] control plane version: v1.30.2
	I0709 10:02:47.267018    4756 api_server.go:131] duration metric: took 12.3415ms to wait for apiserver health ...
	I0709 10:02:47.267018    4756 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 10:02:47.389099    4756 request.go:629] Waited for 121.7769ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods
	I0709 10:02:47.389353    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods
	I0709 10:02:47.389424    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:47.389424    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:47.389424    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:47.395140    4756 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:02:47.395140    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:47.395140    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:47.395140    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:47 GMT
	I0709 10:02:47.395140    4756 round_trippers.go:580]     Audit-Id: 5cdd0542-27ff-4aa8-bfa3-7d22c7a53df6
	I0709 10:02:47.395140    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:47.395140    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:47.395140    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:47.396749    4756 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"612"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"599","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50807 chars]
	I0709 10:02:47.398979    4756 system_pods.go:59] 7 kube-system pods found
	I0709 10:02:47.398979    4756 system_pods.go:61] "coredns-7db6d8ff4d-xdj98" [7c8b499d-245f-49c6-a331-08fb299760f7] Running
	I0709 10:02:47.399518    4756 system_pods.go:61] "etcd-functional-779900" [77e0e55a-96ae-4741-9170-f410d4983d8f] Running
	I0709 10:02:47.399518    4756 system_pods.go:61] "kube-apiserver-functional-779900" [2734e2a5-de09-4b69-8d84-337699102a7c] Running
	I0709 10:02:47.399518    4756 system_pods.go:61] "kube-controller-manager-functional-779900" [8104e4b3-5582-409d-b0f0-6992a4848e48] Running
	I0709 10:02:47.399518    4756 system_pods.go:61] "kube-proxy-g5gkf" [a62a1c0c-e952-4d3b-b01c-d26a621595e3] Running
	I0709 10:02:47.399518    4756 system_pods.go:61] "kube-scheduler-functional-779900" [181edfe6-dd20-4bbd-a373-f0aca8b60e77] Running
	I0709 10:02:47.399518    4756 system_pods.go:61] "storage-provisioner" [551b9b07-edb0-4719-a113-2852a2d661b6] Running
	I0709 10:02:47.399518    4756 system_pods.go:74] duration metric: took 132.4994ms to wait for pod list to return data ...
	I0709 10:02:47.399603    4756 default_sa.go:34] waiting for default service account to be created ...
	I0709 10:02:47.597784    4756 request.go:629] Waited for 197.9288ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/namespaces/default/serviceaccounts
	I0709 10:02:47.597999    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/default/serviceaccounts
	I0709 10:02:47.598038    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:47.598038    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:47.598072    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:47.598298    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:47.598298    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:47.598298    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:47.598298    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:47.601964    4756 round_trippers.go:580]     Content-Length: 261
	I0709 10:02:47.601964    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:47 GMT
	I0709 10:02:47.601964    4756 round_trippers.go:580]     Audit-Id: 880037b2-d1cd-4569-9caa-ac3a42ba425e
	I0709 10:02:47.601964    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:47.601964    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:47.601964    4756 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"612"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d0a323bb-4683-4f1e-a0f1-8600b712040f","resourceVersion":"349","creationTimestamp":"2024-07-09T17:00:15Z"}}]}
	I0709 10:02:47.602424    4756 default_sa.go:45] found service account: "default"
	I0709 10:02:47.602424    4756 default_sa.go:55] duration metric: took 202.8206ms for default service account to be created ...
	I0709 10:02:47.602424    4756 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 10:02:47.794112    4756 request.go:629] Waited for 191.4698ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods
	I0709 10:02:47.794194    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/namespaces/kube-system/pods
	I0709 10:02:47.794337    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:47.794337    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:47.794396    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:47.794582    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:47.794582    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:47.799693    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:47 GMT
	I0709 10:02:47.799693    4756 round_trippers.go:580]     Audit-Id: d287058e-9d21-4cb4-8b9d-5008623ba498
	I0709 10:02:47.799693    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:47.799693    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:47.799693    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:47.799693    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:47.801888    4756 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"612"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-xdj98","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"7c8b499d-245f-49c6-a331-08fb299760f7","resourceVersion":"599","creationTimestamp":"2024-07-09T17:00:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d6dd93a9-2f1f-4758-b16c-6e654dd14161","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T17:00:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6dd93a9-2f1f-4758-b16c-6e654dd14161\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50807 chars]
	I0709 10:02:47.804218    4756 system_pods.go:86] 7 kube-system pods found
	I0709 10:02:47.804218    4756 system_pods.go:89] "coredns-7db6d8ff4d-xdj98" [7c8b499d-245f-49c6-a331-08fb299760f7] Running
	I0709 10:02:47.804218    4756 system_pods.go:89] "etcd-functional-779900" [77e0e55a-96ae-4741-9170-f410d4983d8f] Running
	I0709 10:02:47.804218    4756 system_pods.go:89] "kube-apiserver-functional-779900" [2734e2a5-de09-4b69-8d84-337699102a7c] Running
	I0709 10:02:47.804218    4756 system_pods.go:89] "kube-controller-manager-functional-779900" [8104e4b3-5582-409d-b0f0-6992a4848e48] Running
	I0709 10:02:47.804218    4756 system_pods.go:89] "kube-proxy-g5gkf" [a62a1c0c-e952-4d3b-b01c-d26a621595e3] Running
	I0709 10:02:47.804218    4756 system_pods.go:89] "kube-scheduler-functional-779900" [181edfe6-dd20-4bbd-a373-f0aca8b60e77] Running
	I0709 10:02:47.804309    4756 system_pods.go:89] "storage-provisioner" [551b9b07-edb0-4719-a113-2852a2d661b6] Running
	I0709 10:02:47.804309    4756 system_pods.go:126] duration metric: took 201.8846ms to wait for k8s-apps to be running ...
	I0709 10:02:47.804309    4756 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 10:02:47.814156    4756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 10:02:47.842003    4756 system_svc.go:56] duration metric: took 37.5673ms WaitForService to wait for kubelet
	I0709 10:02:47.842003    4756 kubeadm.go:576] duration metric: took 3.2756111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 10:02:47.842084    4756 node_conditions.go:102] verifying NodePressure condition ...
	I0709 10:02:47.994941    4756 request.go:629] Waited for 152.8565ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.200.147:8441/api/v1/nodes
	I0709 10:02:47.994941    4756 round_trippers.go:463] GET https://172.18.200.147:8441/api/v1/nodes
	I0709 10:02:47.994941    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:47.994941    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:47.995205    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:47.995416    4756 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:02:47.995416    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:47.998977    4756 round_trippers.go:580]     Audit-Id: dc4856d9-0eeb-4096-b2cc-eebefd9b4b3a
	I0709 10:02:47.998977    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:47.998977    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:47.998977    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:47.998977    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:47.998977    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:48 GMT
	I0709 10:02:47.999169    4756 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"612"},"items":[{"metadata":{"name":"functional-779900","uid":"cea4eb9c-4cc1-47df-b2f8-c29a0a018443","resourceVersion":"527","creationTimestamp":"2024-07-09T16:59:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-779900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"functional-779900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T10_00_00_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0709 10:02:47.999596    4756 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:02:47.999694    4756 node_conditions.go:123] node cpu capacity is 2
	I0709 10:02:47.999694    4756 node_conditions.go:105] duration metric: took 157.6098ms to run NodePressure ...
	I0709 10:02:47.999694    4756 start.go:240] waiting for startup goroutines ...
	I0709 10:02:48.972979    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:02:48.972979    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:02:48.973096    4756 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 10:02:48.973096    4756 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 10:02:48.973237    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
	I0709 10:02:48.980706    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:02:48.980706    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:02:48.980706    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:02:51.226218    4756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:02:51.238822    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:02:51.238969    4756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
	I0709 10:02:51.613275    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:02:51.613275    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:02:51.624677    4756 sshutil.go:53] new ssh client: &{IP:172.18.200.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-779900\id_rsa Username:docker}
	I0709 10:02:51.763863    4756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 10:02:52.566221    4756 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0709 10:02:52.566290    4756 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0709 10:02:52.566369    4756 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0709 10:02:52.566369    4756 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0709 10:02:52.566369    4756 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0709 10:02:52.566431    4756 command_runner.go:130] > pod/storage-provisioner configured
	I0709 10:02:53.774777    4756 main.go:141] libmachine: [stdout =====>] : 172.18.200.147
	
	I0709 10:02:53.774777    4756 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:02:53.781187    4756 sshutil.go:53] new ssh client: &{IP:172.18.200.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-779900\id_rsa Username:docker}
	I0709 10:02:53.910746    4756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 10:02:54.071368    4756 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0709 10:02:54.071752    4756 round_trippers.go:463] GET https://172.18.200.147:8441/apis/storage.k8s.io/v1/storageclasses
	I0709 10:02:54.071778    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:54.071778    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:54.071778    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:54.072920    4756 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:02:54.076492    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:54.076594    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:54.076594    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:54.076594    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:54.076594    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:54.076594    4756 round_trippers.go:580]     Content-Length: 1273
	I0709 10:02:54.076594    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:54 GMT
	I0709 10:02:54.076594    4756 round_trippers.go:580]     Audit-Id: 19cb6f0c-0ed4-4347-9c97-f904477dd8c0
	I0709 10:02:54.076775    4756 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"619"},"items":[{"metadata":{"name":"standard","uid":"e23bb9e4-c558-4c32-baa9-5b8cd5d6cf9b","resourceVersion":"435","creationTimestamp":"2024-07-09T17:00:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T17:00:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0709 10:02:54.077973    4756 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"e23bb9e4-c558-4c32-baa9-5b8cd5d6cf9b","resourceVersion":"435","creationTimestamp":"2024-07-09T17:00:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T17:00:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 10:02:54.078142    4756 round_trippers.go:463] PUT https://172.18.200.147:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0709 10:02:54.078142    4756 round_trippers.go:469] Request Headers:
	I0709 10:02:54.078142    4756 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:02:54.078221    4756 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:02:54.078221    4756 round_trippers.go:473]     Content-Type: application/json
	I0709 10:02:54.082194    4756 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:02:54.083071    4756 round_trippers.go:577] Response Headers:
	I0709 10:02:54.083071    4756 round_trippers.go:580]     Audit-Id: 179dddf8-8968-412d-bc30-8ad7f7f3be1a
	I0709 10:02:54.083071    4756 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 10:02:54.083071    4756 round_trippers.go:580]     Content-Type: application/json
	I0709 10:02:54.083071    4756 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f73f8235-e866-4263-a5ae-9a2c93c3abd3
	I0709 10:02:54.083071    4756 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cf8f7f5-a8da-4a31-b8eb-5312346d378e
	I0709 10:02:54.083071    4756 round_trippers.go:580]     Content-Length: 1220
	I0709 10:02:54.083071    4756 round_trippers.go:580]     Date: Tue, 09 Jul 2024 17:02:54 GMT
	I0709 10:02:54.083269    4756 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"e23bb9e4-c558-4c32-baa9-5b8cd5d6cf9b","resourceVersion":"435","creationTimestamp":"2024-07-09T17:00:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T17:00:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 10:02:54.086862    4756 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0709 10:02:54.090109    4756 addons.go:510] duration metric: took 9.5237062s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0709 10:02:54.090229    4756 start.go:245] waiting for cluster config update ...
	I0709 10:02:54.090229    4756 start.go:254] writing updated cluster config ...
	I0709 10:02:54.103722    4756 ssh_runner.go:195] Run: rm -f paused
	I0709 10:02:54.247342    4756 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0709 10:02:54.251606    4756 out.go:177] * Done! kubectl is now configured to use "functional-779900" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 09 17:02:29 functional-779900 dockerd[4388]: time="2024-07-09T17:02:29.892172042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:02:29 functional-779900 dockerd[4388]: time="2024-07-09T17:02:29.892511845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:02:29 functional-779900 dockerd[4388]: time="2024-07-09T17:02:29.983333657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:02:29 functional-779900 dockerd[4388]: time="2024-07-09T17:02:29.983507359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:02:29 functional-779900 dockerd[4388]: time="2024-07-09T17:02:29.983543859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:02:29 functional-779900 dockerd[4388]: time="2024-07-09T17:02:29.983741661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:02:29 functional-779900 dockerd[4388]: time="2024-07-09T17:02:29.983993463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:02:29 functional-779900 dockerd[4388]: time="2024-07-09T17:02:29.984169465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:02:29 functional-779900 dockerd[4388]: time="2024-07-09T17:02:29.984189065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:02:29 functional-779900 dockerd[4388]: time="2024-07-09T17:02:29.984744970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:02:30 functional-779900 cri-dockerd[4674]: time="2024-07-09T17:02:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/69c3ccd56b4bf260d56dbcae6bdbf1502973910236ff61f8f208ce49fb8da231/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 17:02:30 functional-779900 cri-dockerd[4674]: time="2024-07-09T17:02:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cf28e6d15b4a632fc188af1c9bd93f5e3f763c4a39ad495c20dbc0088ddf293b/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 17:02:30 functional-779900 cri-dockerd[4674]: time="2024-07-09T17:02:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1aa772f9f987fff5baf85c3bc0e223cdbe7ed95e49edf563d87b171ab8a767c/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.434896754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.435061755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.435143056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.435337057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.454739630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.457140051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.457307352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.457537854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.743390089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.743533290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.743654291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:02:30 functional-779900 dockerd[4388]: time="2024-07-09T17:02:30.745595608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ac866a5f0d724       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   c1aa772f9f987       coredns-7db6d8ff4d-xdj98
	53dff0daea9c7       53c535741fb44       2 minutes ago       Running             kube-proxy                1                   cf28e6d15b4a6       kube-proxy-g5gkf
	3e92dc92265fc       6e38f40d628db       2 minutes ago       Running             storage-provisioner       2                   69c3ccd56b4bf       storage-provisioner
	8a4903e308029       7820c83aa1394       2 minutes ago       Running             kube-scheduler            2                   c96dfa2612735       kube-scheduler-functional-779900
	41c6f9da32777       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   7bbc0dfb5d49c       etcd-functional-779900
	6a451eb47ca93       e874818b3caac       2 minutes ago       Running             kube-controller-manager   2                   ace47761fb2c1       kube-controller-manager-functional-779900
	66fac08cc2f44       56ce0fd9fb532       2 minutes ago       Running             kube-apiserver            2                   cad0b6cde78e3       kube-apiserver-functional-779900
	036f9fa363902       6e38f40d628db       2 minutes ago       Created             storage-provisioner       1                   a9d14b6f518e5       storage-provisioner
	f8947b6882c0d       7820c83aa1394       2 minutes ago       Created             kube-scheduler            1                   a6ab324aafded       kube-scheduler-functional-779900
	62bf4cb620aad       3861cfcd7c04c       2 minutes ago       Created             etcd                      1                   126d590bdbfd6       etcd-functional-779900
	ecba6464984b5       56ce0fd9fb532       2 minutes ago       Created             kube-apiserver            1                   9d2cb3cb7cbcd       kube-apiserver-functional-779900
	27bf3f788b985       e874818b3caac       2 minutes ago       Exited              kube-controller-manager   1                   3fcd6a9eb3d6a       kube-controller-manager-functional-779900
	ff1ad74991908       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   365aedee09e0f       coredns-7db6d8ff4d-xdj98
	8c72b844d05ab       53c535741fb44       4 minutes ago       Exited              kube-proxy                0                   04b318e9d326c       kube-proxy-g5gkf
	
	
	==> coredns [ac866a5f0d72] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = eabdad51eef6fc649fa850c178ba451366b41048c1c621a6be25e706245d9103e597e4866d961c875c300d6a366ff9db50ab3afe55608b789039c53007846ed6
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38224 - 36581 "HINFO IN 5007542590055846275.4101018025772234189. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041369469s
	
	
	==> coredns [ff1ad7499190] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[1640098843]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (09-Jul-2024 17:00:17.474) (total time: 27734ms):
	Trace[1640098843]: ---"Objects listed" error:<nil> 27734ms (17:00:45.208)
	Trace[1640098843]: [27.734627657s] [27.734627657s] END
	[INFO] plugin/kubernetes: Trace[1623779088]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (09-Jul-2024 17:00:17.474) (total time: 27736ms):
	Trace[1623779088]: ---"Objects listed" error:<nil> 27736ms (17:00:45.210)
	Trace[1623779088]: [27.736187063s] [27.736187063s] END
	[INFO] plugin/kubernetes: Trace[525724574]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (09-Jul-2024 17:00:17.474) (total time: 27737ms):
	Trace[525724574]: ---"Objects listed" error:<nil> 27737ms (17:00:45.211)
	Trace[525724574]: [27.737262064s] [27.737262064s] END
	[INFO] plugin/reload: Running configuration SHA512 = eabdad51eef6fc649fa850c178ba451366b41048c1c621a6be25e706245d9103e597e4866d961c875c300d6a366ff9db50ab3afe55608b789039c53007846ed6
	[INFO] Reloading complete
	[INFO] 127.0.0.1:35426 - 53837 "HINFO IN 8080776471931353258.6507284865152149358. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046454699s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-779900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-779900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=functional-779900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T10_00_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 16:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-779900
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 17:04:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 17:04:31 +0000   Tue, 09 Jul 2024 16:59:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 17:04:31 +0000   Tue, 09 Jul 2024 16:59:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 17:04:31 +0000   Tue, 09 Jul 2024 16:59:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 17:04:31 +0000   Tue, 09 Jul 2024 17:00:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.200.147
	  Hostname:    functional-779900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbb183ad6b904b53a1bde0719deccd8c
	  System UUID:                3a1dc171-d88f-8f4c-8acd-a17430cff31d
	  Boot ID:                    6e19a689-5afa-4fc9-85a7-eb129d6472d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-xdj98                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m20s
	  kube-system                 etcd-functional-779900                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m35s
	  kube-system                 kube-apiserver-functional-779900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-controller-manager-functional-779900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-g5gkf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-functional-779900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m18s                  kube-proxy       
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  Starting                 4m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m43s (x8 over 4m43s)  kubelet          Node functional-779900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s (x8 over 4m43s)  kubelet          Node functional-779900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s (x7 over 4m43s)  kubelet          Node functional-779900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     4m35s                  kubelet          Node functional-779900 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m35s                  kubelet          Node functional-779900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s                  kubelet          Node functional-779900 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m35s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m33s                  kubelet          Node functional-779900 status is now: NodeReady
	  Normal  RegisteredNode           4m21s                  node-controller  Node functional-779900 event: Registered Node functional-779900 in Controller
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node functional-779900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node functional-779900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node functional-779900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           113s                   node-controller  Node functional-779900 event: Registered Node functional-779900 in Controller
	
	
	==> dmesg <==
	[  +5.076197] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.621830] systemd-fstab-generator[1672]: Ignoring "noauto" option for root device
	[  +6.016923] systemd-fstab-generator[1866]: Ignoring "noauto" option for root device
	[  +0.088743] kauditd_printk_skb: 36 callbacks suppressed
	[  +7.518108] systemd-fstab-generator[2272]: Ignoring "noauto" option for root device
	[  +0.115600] kauditd_printk_skb: 62 callbacks suppressed
	[Jul 9 17:00] systemd-fstab-generator[2526]: Ignoring "noauto" option for root device
	[  +0.177880] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.016736] kauditd_printk_skb: 88 callbacks suppressed
	[ +22.124132] kauditd_printk_skb: 10 callbacks suppressed
	[Jul 9 17:02] systemd-fstab-generator[3920]: Ignoring "noauto" option for root device
	[  +0.620905] systemd-fstab-generator[3956]: Ignoring "noauto" option for root device
	[  +0.239705] systemd-fstab-generator[3968]: Ignoring "noauto" option for root device
	[  +0.299334] systemd-fstab-generator[3982]: Ignoring "noauto" option for root device
	[  +5.342829] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.966234] systemd-fstab-generator[4627]: Ignoring "noauto" option for root device
	[  +0.217114] systemd-fstab-generator[4639]: Ignoring "noauto" option for root device
	[  +0.199248] systemd-fstab-generator[4651]: Ignoring "noauto" option for root device
	[  +0.275195] systemd-fstab-generator[4666]: Ignoring "noauto" option for root device
	[  +0.851681] systemd-fstab-generator[4827]: Ignoring "noauto" option for root device
	[  +0.804541] kauditd_printk_skb: 139 callbacks suppressed
	[  +3.261680] systemd-fstab-generator[5389]: Ignoring "noauto" option for root device
	[  +1.941339] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.198284] kauditd_printk_skb: 37 callbacks suppressed
	[ +13.549479] systemd-fstab-generator[6372]: Ignoring "noauto" option for root device
	
	
	==> etcd [41c6f9da3277] <==
	{"level":"info","ts":"2024-07-09T17:02:26.107658Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-09T17:02:26.10767Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-09T17:02:26.107966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c22e472069d823e4 switched to configuration voters=(13992199296827007972)"}
	{"level":"info","ts":"2024-07-09T17:02:26.108047Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2b1a163d6777c3a0","local-member-id":"c22e472069d823e4","added-peer-id":"c22e472069d823e4","added-peer-peer-urls":["https://172.18.200.147:2380"]}
	{"level":"info","ts":"2024-07-09T17:02:26.10815Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2b1a163d6777c3a0","local-member-id":"c22e472069d823e4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T17:02:26.108183Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T17:02:26.127004Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-09T17:02:26.132429Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.18.200.147:2380"}
	{"level":"info","ts":"2024-07-09T17:02:26.13317Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.18.200.147:2380"}
	{"level":"info","ts":"2024-07-09T17:02:26.132978Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c22e472069d823e4","initial-advertise-peer-urls":["https://172.18.200.147:2380"],"listen-peer-urls":["https://172.18.200.147:2380"],"advertise-client-urls":["https://172.18.200.147:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.18.200.147:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-09T17:02:26.132997Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-09T17:02:27.492629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c22e472069d823e4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-09T17:02:27.492695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c22e472069d823e4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-09T17:02:27.492734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c22e472069d823e4 received MsgPreVoteResp from c22e472069d823e4 at term 2"}
	{"level":"info","ts":"2024-07-09T17:02:27.492766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c22e472069d823e4 became candidate at term 3"}
	{"level":"info","ts":"2024-07-09T17:02:27.492772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c22e472069d823e4 received MsgVoteResp from c22e472069d823e4 at term 3"}
	{"level":"info","ts":"2024-07-09T17:02:27.492782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c22e472069d823e4 became leader at term 3"}
	{"level":"info","ts":"2024-07-09T17:02:27.49279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c22e472069d823e4 elected leader c22e472069d823e4 at term 3"}
	{"level":"info","ts":"2024-07-09T17:02:27.505436Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c22e472069d823e4","local-member-attributes":"{Name:functional-779900 ClientURLs:[https://172.18.200.147:2379]}","request-path":"/0/members/c22e472069d823e4/attributes","cluster-id":"2b1a163d6777c3a0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-09T17:02:27.505791Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-09T17:02:27.508268Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-09T17:02:27.511286Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-09T17:02:27.511706Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-09T17:02:27.512065Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-09T17:02:27.514651Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.18.200.147:2379"}
	
	
	==> etcd [62bf4cb620aa] <==
	
	
	==> kernel <==
	 17:04:35 up 6 min,  0 users,  load average: 0.29, 0.38, 0.19
	Linux functional-779900 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [66fac08cc2f4] <==
	I0709 17:02:28.993271       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0709 17:02:28.996555       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0709 17:02:28.996588       1 policy_source.go:224] refreshing policies
	I0709 17:02:29.057720       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0709 17:02:29.058994       1 shared_informer.go:320] Caches are synced for configmaps
	I0709 17:02:29.065372       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0709 17:02:29.067708       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0709 17:02:29.068862       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0709 17:02:29.069318       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0709 17:02:29.069352       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0709 17:02:29.070961       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0709 17:02:29.071071       1 aggregator.go:165] initial CRD sync complete...
	I0709 17:02:29.071118       1 autoregister_controller.go:141] Starting autoregister controller
	I0709 17:02:29.071124       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0709 17:02:29.071147       1 cache.go:39] Caches are synced for autoregister controller
	I0709 17:02:29.072103       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0709 17:02:29.096720       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0709 17:02:29.869137       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0709 17:02:31.286492       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0709 17:02:31.303271       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0709 17:02:31.357249       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0709 17:02:31.417810       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0709 17:02:31.428169       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0709 17:02:42.105921       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0709 17:02:42.117661       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [ecba6464984b] <==
	
	
	==> kube-controller-manager [27bf3f788b98] <==
	
	
	==> kube-controller-manager [6a451eb47ca9] <==
	I0709 17:02:42.082297       1 shared_informer.go:320] Caches are synced for namespace
	I0709 17:02:42.083016       1 shared_informer.go:320] Caches are synced for crt configmap
	I0709 17:02:42.083175       1 shared_informer.go:320] Caches are synced for stateful set
	I0709 17:02:42.084946       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0709 17:02:42.085399       1 shared_informer.go:320] Caches are synced for attach detach
	I0709 17:02:42.086285       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0709 17:02:42.086602       1 shared_informer.go:320] Caches are synced for expand
	I0709 17:02:42.086749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="140.199µs"
	I0709 17:02:42.090343       1 shared_informer.go:320] Caches are synced for ephemeral
	I0709 17:02:42.090348       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0709 17:02:42.093748       1 shared_informer.go:320] Caches are synced for GC
	I0709 17:02:42.096760       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0709 17:02:42.098678       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0709 17:02:42.100255       1 shared_informer.go:320] Caches are synced for daemon sets
	I0709 17:02:42.103002       1 shared_informer.go:320] Caches are synced for endpoint
	I0709 17:02:42.124771       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0709 17:02:42.136655       1 shared_informer.go:320] Caches are synced for PV protection
	I0709 17:02:42.172907       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0709 17:02:42.280792       1 shared_informer.go:320] Caches are synced for resource quota
	I0709 17:02:42.299749       1 shared_informer.go:320] Caches are synced for resource quota
	I0709 17:02:42.314623       1 shared_informer.go:320] Caches are synced for disruption
	I0709 17:02:42.339764       1 shared_informer.go:320] Caches are synced for deployment
	I0709 17:02:42.742232       1 shared_informer.go:320] Caches are synced for garbage collector
	I0709 17:02:42.757281       1 shared_informer.go:320] Caches are synced for garbage collector
	I0709 17:02:42.757446       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [53dff0daea9c] <==
	I0709 17:02:30.800167       1 server_linux.go:69] "Using iptables proxy"
	I0709 17:02:30.818685       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.200.147"]
	I0709 17:02:30.912575       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 17:02:30.912746       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 17:02:30.912787       1 server_linux.go:165] "Using iptables Proxier"
	I0709 17:02:30.921316       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 17:02:30.921786       1 server.go:872] "Version info" version="v1.30.2"
	I0709 17:02:30.922165       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 17:02:30.923760       1 config.go:192] "Starting service config controller"
	I0709 17:02:30.923805       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 17:02:30.923834       1 config.go:101] "Starting endpoint slice config controller"
	I0709 17:02:30.923839       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 17:02:30.924973       1 config.go:319] "Starting node config controller"
	I0709 17:02:30.925001       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 17:02:31.024536       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0709 17:02:31.024706       1 shared_informer.go:320] Caches are synced for service config
	I0709 17:02:31.025606       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [8c72b844d05a] <==
	I0709 17:00:17.452903       1 server_linux.go:69] "Using iptables proxy"
	I0709 17:00:17.475763       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.200.147"]
	I0709 17:00:17.530872       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 17:00:17.530963       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 17:00:17.530983       1 server_linux.go:165] "Using iptables Proxier"
	I0709 17:00:17.535497       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 17:00:17.536121       1 server.go:872] "Version info" version="v1.30.2"
	I0709 17:00:17.536157       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 17:00:17.537723       1 config.go:192] "Starting service config controller"
	I0709 17:00:17.537894       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 17:00:17.537924       1 config.go:101] "Starting endpoint slice config controller"
	I0709 17:00:17.537929       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 17:00:17.538656       1 config.go:319] "Starting node config controller"
	I0709 17:00:17.538692       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 17:00:17.638162       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0709 17:00:17.638355       1 shared_informer.go:320] Caches are synced for service config
	I0709 17:00:17.639215       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8a4903e30802] <==
	I0709 17:02:26.978887       1 serving.go:380] Generated self-signed cert in-memory
	W0709 17:02:28.950435       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0709 17:02:28.950760       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0709 17:02:28.950920       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0709 17:02:28.951119       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0709 17:02:28.992938       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0709 17:02:28.993924       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 17:02:28.996361       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0709 17:02:28.997283       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0709 17:02:28.997468       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0709 17:02:28.997655       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0709 17:02:29.098551       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f8947b6882c0] <==
	
	
	==> kubelet <==
	Jul 09 17:02:29 functional-779900 kubelet[5396]: I0709 17:02:29.038855    5396 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 09 17:02:29 functional-779900 kubelet[5396]: E0709 17:02:29.090727    5396 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-functional-779900\" already exists" pod="kube-system/kube-controller-manager-functional-779900"
	Jul 09 17:02:29 functional-779900 kubelet[5396]: I0709 17:02:29.262293    5396 apiserver.go:52] "Watching apiserver"
	Jul 09 17:02:29 functional-779900 kubelet[5396]: I0709 17:02:29.265343    5396 topology_manager.go:215] "Topology Admit Handler" podUID="7c8b499d-245f-49c6-a331-08fb299760f7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xdj98"
	Jul 09 17:02:29 functional-779900 kubelet[5396]: I0709 17:02:29.265478    5396 topology_manager.go:215] "Topology Admit Handler" podUID="a62a1c0c-e952-4d3b-b01c-d26a621595e3" podNamespace="kube-system" podName="kube-proxy-g5gkf"
	Jul 09 17:02:29 functional-779900 kubelet[5396]: I0709 17:02:29.265568    5396 topology_manager.go:215] "Topology Admit Handler" podUID="551b9b07-edb0-4719-a113-2852a2d661b6" podNamespace="kube-system" podName="storage-provisioner"
	Jul 09 17:02:29 functional-779900 kubelet[5396]: I0709 17:02:29.268176    5396 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 09 17:02:29 functional-779900 kubelet[5396]: I0709 17:02:29.358829    5396 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/551b9b07-edb0-4719-a113-2852a2d661b6-tmp\") pod \"storage-provisioner\" (UID: \"551b9b07-edb0-4719-a113-2852a2d661b6\") " pod="kube-system/storage-provisioner"
	Jul 09 17:02:29 functional-779900 kubelet[5396]: I0709 17:02:29.359023    5396 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a62a1c0c-e952-4d3b-b01c-d26a621595e3-lib-modules\") pod \"kube-proxy-g5gkf\" (UID: \"a62a1c0c-e952-4d3b-b01c-d26a621595e3\") " pod="kube-system/kube-proxy-g5gkf"
	Jul 09 17:02:29 functional-779900 kubelet[5396]: I0709 17:02:29.359095    5396 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a62a1c0c-e952-4d3b-b01c-d26a621595e3-xtables-lock\") pod \"kube-proxy-g5gkf\" (UID: \"a62a1c0c-e952-4d3b-b01c-d26a621595e3\") " pod="kube-system/kube-proxy-g5gkf"
	Jul 09 17:02:30 functional-779900 kubelet[5396]: I0709 17:02:30.112554    5396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69c3ccd56b4bf260d56dbcae6bdbf1502973910236ff61f8f208ce49fb8da231"
	Jul 09 17:02:30 functional-779900 kubelet[5396]: I0709 17:02:30.323436    5396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1aa772f9f987fff5baf85c3bc0e223cdbe7ed95e49edf563d87b171ab8a767c"
	Jul 09 17:02:30 functional-779900 kubelet[5396]: I0709 17:02:30.671921    5396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf28e6d15b4a632fc188af1c9bd93f5e3f763c4a39ad495c20dbc0088ddf293b"
	Jul 09 17:02:32 functional-779900 kubelet[5396]: I0709 17:02:32.768827    5396 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 09 17:02:34 functional-779900 kubelet[5396]: I0709 17:02:34.679462    5396 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 09 17:03:24 functional-779900 kubelet[5396]: E0709 17:03:24.321251    5396 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 17:03:24 functional-779900 kubelet[5396]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 17:03:24 functional-779900 kubelet[5396]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 17:03:24 functional-779900 kubelet[5396]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 17:03:24 functional-779900 kubelet[5396]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 17:04:24 functional-779900 kubelet[5396]: E0709 17:04:24.292624    5396 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 17:04:24 functional-779900 kubelet[5396]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 17:04:24 functional-779900 kubelet[5396]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 17:04:24 functional-779900 kubelet[5396]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 17:04:24 functional-779900 kubelet[5396]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [036f9fa36390] <==
	
	
	==> storage-provisioner [3e92dc92265f] <==
	I0709 17:02:30.662315       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0709 17:02:30.718856       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0709 17:02:30.719424       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0709 17:02:48.152460       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0709 17:02:48.152838       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-779900_22dae3dd-403e-4a46-92b6-a7bfcecdf5fd!
	I0709 17:02:48.154680       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"284f1eb5-9ef4-4f03-ad8d-c602e3a1d2ba", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-779900_22dae3dd-403e-4a46-92b6-a7bfcecdf5fd became leader
	I0709 17:02:48.253768       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-779900_22dae3dd-403e-4a46-92b6-a7bfcecdf5fd!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 10:04:27.887492    1188 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-779900 -n functional-779900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-779900 -n functional-779900: (12.0060481s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-779900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (33.14s)
TestFunctional/parallel/ConfigCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-779900 config unset cpus" to be -""- but got *"W0709 10:07:40.506860    9844 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-779900 config get cpus: exit status 14 (196.9888ms)

                                                
                                                
** stderr ** 
	W0709 10:07:40.734441    3424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-779900 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0709 10:07:40.734441    3424 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-779900 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0709 10:07:40.934352    9268 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-779900 config get cpus" to be -""- but got *"W0709 10:07:41.210146    7660 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-779900 config unset cpus" to be -""- but got *"W0709 10:07:41.439574    7476 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-779900 config get cpus: exit status 14 (179.2066ms)

                                                
                                                
** stderr ** 
	W0709 10:07:41.649874    1640 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-779900 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0709 10:07:41.649874    1640 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-779900 service --namespace=default --https --url hello-node: exit status 1 (15.020891s)

                                                
                                                
** stderr ** 
	W0709 10:09:32.570632    7108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-779900 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-779900 service hello-node --url --format={{.IP}}: exit status 1 (15.0304562s)

                                                
                                                
** stderr ** 
	W0709 10:09:47.618067   15268 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-779900 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-779900 service hello-node --url: exit status 1 (15.0282657s)

                                                
                                                
** stderr ** 
	W0709 10:10:02.631695    7216 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-779900 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.04s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (68.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-q8dt8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-q8dt8 -- sh -c "ping -c 1 172.18.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-q8dt8 -- sh -c "ping -c 1 172.18.192.1": exit status 1 (10.4315229s)

                                                
                                                
-- stdout --
	PING 172.18.192.1 (172.18.192.1): 56 data bytes
	
	--- 172.18.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 10:28:10.909286    2680 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.18.192.1) from pod (busybox-fc5497c4f-q8dt8): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-sf672 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-sf672 -- sh -c "ping -c 1 172.18.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-sf672 -- sh -c "ping -c 1 172.18.192.1": exit status 1 (10.4479376s)

-- stdout --
	PING 172.18.192.1 (172.18.192.1): 56 data bytes
	
	--- 172.18.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0709 10:28:21.807249    7736 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.18.192.1) from pod (busybox-fc5497c4f-sf672): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-wvs72 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-wvs72 -- sh -c "ping -c 1 172.18.192.1"
E0709 10:28:33.292932   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-wvs72 -- sh -c "ping -c 1 172.18.192.1": exit status 1 (10.4556944s)

-- stdout --
	PING 172.18.192.1 (172.18.192.1): 56 data bytes
	
	--- 172.18.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0709 10:28:32.735714    6008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.18.192.1) from pod (busybox-fc5497c4f-wvs72): exit status 1
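The host address being pinged comes from the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline in the runs above. A self-contained sketch of that parsing against sample BusyBox-style nslookup output (the sample text is illustrative, not captured from this run):

```shell
# BusyBox nslookup prints the resolver info first; the resolved address for the
# queried name lands on line 5, so NR==5 selects it and field 3 is the IP.
sample_output='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 172.18.192.1 host.minikube.internal'

host_ip=$(printf '%s\n' "$sample_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
# → 172.18.192.1
```

In the failures above, this resolution step succeeded each time; only the subsequent ICMP echo to the resolved address went unanswered (100% packet loss), which points at the pod-to-host network path rather than DNS.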
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-400600 -n ha-400600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-400600 -n ha-400600: (12.5230142s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 logs -n 25: (8.8254605s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | functional-779900 ssh pgrep          | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:12 PDT |                     |
	|         | buildkitd                            |                   |                   |         |                     |                     |
	| image   | functional-779900 image build -t     | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:12 PDT | 09 Jul 24 10:12 PDT |
	|         | localhost/my-image:functional-779900 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-779900 image ls           | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:12 PDT | 09 Jul 24 10:12 PDT |
	| delete  | -p functional-779900                 | functional-779900 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:14 PDT | 09 Jul 24 10:16 PDT |
	| start   | -p ha-400600 --wait=true             | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:16 PDT | 09 Jul 24 10:27 PDT |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- apply -f             | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:27 PDT | 09 Jul 24 10:27 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- rollout status       | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:27 PDT | 09 Jul 24 10:28 PDT |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- get pods -o          | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- get pods -o          | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-q8dt8 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-sf672 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-wvs72 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-q8dt8 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-sf672 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-wvs72 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-q8dt8 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-sf672 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-wvs72 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- get pods -o          | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-q8dt8              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT |                     |
	|         | busybox-fc5497c4f-q8dt8 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.192.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-sf672              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT |                     |
	|         | busybox-fc5497c4f-sf672 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.192.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT | 09 Jul 24 10:28 PDT |
	|         | busybox-fc5497c4f-wvs72              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-400600 -- exec                 | ha-400600         | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:28 PDT |                     |
	|         | busybox-fc5497c4f-wvs72 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.192.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 10:16:02
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 10:16:02.755734    6700 out.go:291] Setting OutFile to fd 1532 ...
	I0709 10:16:02.756323    6700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:16:02.756323    6700 out.go:304] Setting ErrFile to fd 1372...
	I0709 10:16:02.756323    6700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:16:02.781053    6700 out.go:298] Setting JSON to false
	I0709 10:16:02.782532    6700 start.go:129] hostinfo: {"hostname":"minikube1","uptime":3631,"bootTime":1720541731,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 10:16:02.782532    6700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 10:16:02.796698    6700 out.go:177] * [ha-400600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 10:16:02.800534    6700 notify.go:220] Checking for updates...
	I0709 10:16:02.803091    6700 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:16:02.804820    6700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 10:16:02.807779    6700 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 10:16:02.814808    6700 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 10:16:02.818871    6700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 10:16:02.821273    6700 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 10:16:08.035987    6700 out.go:177] * Using the hyperv driver based on user configuration
	I0709 10:16:08.039729    6700 start.go:297] selected driver: hyperv
	I0709 10:16:08.039729    6700 start.go:901] validating driver "hyperv" against <nil>
	I0709 10:16:08.039729    6700 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 10:16:08.086300    6700 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 10:16:08.088400    6700 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 10:16:08.088400    6700 cni.go:84] Creating CNI manager for ""
	I0709 10:16:08.088400    6700 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0709 10:16:08.088400    6700 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0709 10:16:08.088400    6700 start.go:340] cluster config:
	{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 10:16:08.089479    6700 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 10:16:08.097177    6700 out.go:177] * Starting "ha-400600" primary control-plane node in "ha-400600" cluster
	I0709 10:16:08.102857    6700 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 10:16:08.102857    6700 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 10:16:08.102857    6700 cache.go:56] Caching tarball of preloaded images
	I0709 10:16:08.103408    6700 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 10:16:08.103655    6700 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 10:16:08.104197    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:16:08.104197    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json: {Name:mkd46017acd4713454e4339419b70af7bfbb4b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:16:08.105617    6700 start.go:360] acquireMachinesLock for ha-400600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 10:16:08.105617    6700 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-400600"
	I0709 10:16:08.105617    6700 start.go:93] Provisioning new machine with config: &{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:16:08.106218    6700 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 10:16:08.111683    6700 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 10:16:08.112335    6700 start.go:159] libmachine.API.Create for "ha-400600" (driver="hyperv")
	I0709 10:16:08.112335    6700 client.go:168] LocalClient.Create starting
	I0709 10:16:08.112528    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 10:16:08.113194    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:16:08.113237    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:16:08.113489    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 10:16:08.113736    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:16:08.113736    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:16:08.113736    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 10:16:10.121839    6700 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 10:16:10.124605    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:10.124689    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 10:16:11.863932    6700 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 10:16:11.863932    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:11.864030    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:16:13.256383    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:16:13.256476    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:13.256476    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:16:16.660501    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:16:16.672572    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:16.675158    6700 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 10:16:17.188138    6700 main.go:141] libmachine: Creating SSH key...
	I0709 10:16:17.276605    6700 main.go:141] libmachine: Creating VM...
	I0709 10:16:17.276605    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:16:19.966362    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:16:19.966362    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:19.966362    6700 main.go:141] libmachine: Using switch "Default Switch"
	I0709 10:16:19.978661    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:16:21.612793    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:16:21.620808    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:21.620808    6700 main.go:141] libmachine: Creating VHD
	I0709 10:16:21.621045    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 10:16:25.332547    6700 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 7D808E41-B9EE-446B-95C5-A2188640DBA0
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 10:16:25.332716    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:25.333051    6700 main.go:141] libmachine: Writing magic tar header
	I0709 10:16:25.333119    6700 main.go:141] libmachine: Writing SSH key tar header
	I0709 10:16:25.344742    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 10:16:28.542978    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:28.555344    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:28.555344    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\disk.vhd' -SizeBytes 20000MB
	I0709 10:16:31.127986    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:31.127986    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:31.139453    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-400600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 10:16:34.712385    6700 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-400600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 10:16:34.712451    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:34.712451    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-400600 -DynamicMemoryEnabled $false
	I0709 10:16:36.921106    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:36.921389    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:36.921389    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-400600 -Count 2
	I0709 10:16:39.096299    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:39.096299    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:39.096497    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-400600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\boot2docker.iso'
	I0709 10:16:41.581345    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:41.594208    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:41.594208    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-400600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\disk.vhd'
	I0709 10:16:44.138028    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:44.138028    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:44.138028    6700 main.go:141] libmachine: Starting VM...
	I0709 10:16:44.149701    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-400600
	I0709 10:16:47.183394    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:47.183394    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:47.183394    6700 main.go:141] libmachine: Waiting for host to start...
	I0709 10:16:47.183394    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:16:49.462357    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:16:49.462409    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:49.462526    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:16:51.942938    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:51.942938    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:52.959645    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:16:55.140390    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:16:55.140390    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:55.150751    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:16:57.651076    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:57.651180    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:58.665138    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:00.845849    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:00.849421    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:00.849580    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:03.302240    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:17:03.309323    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:04.320576    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:06.563997    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:06.564385    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:06.564385    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:09.032272    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:17:09.043856    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:10.048655    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:12.202466    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:12.202466    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:12.213773    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:14.649305    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:14.649305    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:14.660701    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:16.666623    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:16.666623    6700 main.go:141] libmachine: [stderr =====>] : 
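The "Waiting for host to start..." section above is a poll loop: minikube repeatedly queries `(Get-VM ...).networkadapters[0].ipaddresses[0]` and keeps retrying while the result is empty, as it is until the guest boots. A minimal sketch of that pattern, where `probe` fakes the Hyper-V IP query (empty for the first two polls, then an address) and the address, retry cadence, and 10-second deadline are illustrative:

```shell
#!/usr/bin/env bash
set -eu

# Fake the Hyper-V IP query: empty for the first two polls, then an address.
# A temp file counts calls so the count survives command substitution.
calls="$(mktemp)"
probe() {
  echo . >> "$calls"
  if [ "$(wc -l < "$calls")" -ge 3 ]; then
    echo "172.18.204.161"
  fi
}

wait_for_ip() {
  local deadline=$((SECONDS + 10)) ip=""
  while [ "$SECONDS" -lt "$deadline" ]; do
    ip="$(probe)"
    if [ -n "$ip" ]; then
      echo "$ip"          # guest finally reports an address
      return 0
    fi
    sleep 0.1             # the real loop waits ~1s between state/IP queries
  done
  return 1                # timed out, as minikube eventually would
}

wait_for_ip
rm -f "$calls"
```

The state query (`( Get-VM ... ).state`) runs before each IP query in the real loop; the sketch collapses both into one probe for brevity.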
	I0709 10:17:16.680075    6700 machine.go:94] provisionDockerMachine start ...
	I0709 10:17:16.680196    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:18.718694    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:18.718694    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:18.723268    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:21.151956    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:21.151956    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:21.158615    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:21.166747    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:21.166747    6700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 10:17:21.309374    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 10:17:21.309446    6700 buildroot.go:166] provisioning hostname "ha-400600"
	I0709 10:17:21.309446    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:23.340531    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:23.340531    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:23.354344    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:25.778007    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:25.778007    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:25.783510    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:25.783956    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:25.783956    6700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-400600 && echo "ha-400600" | sudo tee /etc/hostname
	I0709 10:17:25.938090    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-400600
	
	I0709 10:17:25.938188    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:27.930864    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:27.930864    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:27.943013    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:30.379360    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:30.379360    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:30.385515    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:30.385515    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:30.385515    6700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-400600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-400600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-400600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 10:17:30.529380    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
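The SSH command above is an idempotent hostname entry update: add or rewrite a `127.0.1.1` line only if no line already maps to the hostname. A runnable sketch of the same logic operating on a caller-supplied file rather than `/etc/hosts`; `demo-host` and the temp-file usage below are illustrative, not the real machine name from the log:

```shell
#!/usr/bin/env bash
set -eu

ensure_hostname_entry() {
  local hosts_file="$1" name="$2"
  # Nothing to do if some line already maps an address to this hostname.
  if ! grep -q "[[:space:]]${name}\$" "$hosts_file"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts_file"; then
      # Rewrite the existing 127.0.1.1 line in place.
      sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" "$hosts_file"
    else
      # No 127.0.1.1 line yet: append one.
      echo "127.0.1.1 ${name}" >> "$hosts_file"
    fi
  fi
}

hosts="$(mktemp)"
printf '127.0.0.1 localhost\n' > "$hosts"
ensure_hostname_entry "$hosts" demo-host
ensure_hostname_entry "$hosts" demo-host   # second call leaves the file unchanged
grep -c 'demo-host' "$hosts"
rm -f "$hosts"
```

Running the command twice with the same name is a no-op, which is why the provisioner can safely re-run it on every start.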
	I0709 10:17:30.529380    6700 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 10:17:30.529380    6700 buildroot.go:174] setting up certificates
	I0709 10:17:30.529380    6700 provision.go:84] configureAuth start
	I0709 10:17:30.529380    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:32.547259    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:32.558673    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:32.558805    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:34.975911    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:34.987287    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:34.987287    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:36.980820    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:36.980820    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:36.991091    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:39.523378    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:39.523378    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:39.534939    6700 provision.go:143] copyHostCerts
	I0709 10:17:39.535158    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 10:17:39.535537    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 10:17:39.535537    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 10:17:39.535902    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 10:17:39.537314    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 10:17:39.537461    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 10:17:39.537461    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 10:17:39.537995    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 10:17:39.538912    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 10:17:39.538912    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 10:17:39.539445    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 10:17:39.539901    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 10:17:39.541051    6700 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-400600 san=[127.0.0.1 172.18.204.161 ha-400600 localhost minikube]
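The `generating server cert` line above shows the SAN list minikube bakes into `server.pem` (127.0.0.1, the VM IP, the machine name, localhost, minikube). A sketch of issuing a cert with that SAN list using openssl's `-addext` (OpenSSL 1.1.1+); this produces a one-shot self-signed cert rather than minikube's CA-signed flow, so it is an approximation of the mechanism, not the actual implementation:

```shell
#!/usr/bin/env bash
set -eu

dir="$(mktemp -d)"
# Self-signed cert carrying the same subjectAltName entries as the log line.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/server-key.pem" -out "$dir/server.pem" \
  -subj "/O=jenkins.ha-400600" \
  -addext "subjectAltName=IP:127.0.0.1,IP:172.18.204.161,DNS:ha-400600,DNS:localhost,DNS:minikube" \
  2>/dev/null

# Show the SANs baked into the cert.
openssl x509 -in "$dir/server.pem" -noout -text | grep -A1 'Subject Alternative Name'
rm -rf "$dir"
```

The SANs matter because the Docker daemon below is started with `--tlsverify`, so clients validate the server cert against whichever name or IP they dialed.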
	I0709 10:17:39.804727    6700 provision.go:177] copyRemoteCerts
	I0709 10:17:39.835159    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 10:17:39.835159    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:41.854879    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:41.866183    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:41.866506    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:44.227384    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:44.227384    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:44.241571    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:17:44.348653    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5134839s)
	I0709 10:17:44.348653    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 10:17:44.349576    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 10:17:44.391304    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 10:17:44.391502    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0709 10:17:44.434730    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 10:17:44.435311    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 10:17:44.477867    6700 provision.go:87] duration metric: took 13.9484564s to configureAuth
	I0709 10:17:44.478026    6700 buildroot.go:189] setting minikube options for container-runtime
	I0709 10:17:44.478981    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:17:44.479218    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:46.510643    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:46.510643    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:46.523587    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:48.971159    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:48.971159    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:48.988688    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:48.989278    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:48.989418    6700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 10:17:49.123099    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 10:17:49.123099    6700 buildroot.go:70] root file system type: tmpfs
	I0709 10:17:49.123441    6700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 10:17:49.123524    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:51.153888    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:51.165482    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:51.165482    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:53.534359    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:53.546598    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:53.552181    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:53.552933    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:53.552933    6700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 10:17:53.703587    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 10:17:53.703587    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:55.737929    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:55.738046    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:55.738046    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:58.144661    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:58.144661    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:58.150610    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:58.151350    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:58.151350    6700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 10:18:00.274769    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
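The `diff ... || { mv ...; systemctl ...; }` command above is an install-if-changed pattern: the candidate unit is written to `docker.service.new`, and it is only moved into place (followed by daemon-reload/enable/restart on the real host) when it differs from the current file, or when the current file is missing, as in this first-boot log. A sketch with temp paths standing in for `/lib/systemd/system` and the systemctl calls left as comments, since the sketch runs unprivileged:

```shell
#!/usr/bin/env bash
set -eu

install_if_changed() {
  local current="$1" candidate="$2"
  if diff -u "$current" "$candidate" >/dev/null 2>&1; then
    rm -f "$candidate"            # identical: keep the current file, skip the restart
    echo "unchanged"
  else
    # Covers both "content differs" and "current file missing", matching the
    # log's "diff: can't stat ... No such file or directory" first install.
    mv "$candidate" "$current"
    # sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    echo "installed"
  fi
}

current="$(mktemp -u)"            # a path that does not exist yet
candidate="$(mktemp)"
printf '[Unit]\nDescription=demo\n' > "$candidate"
install_if_changed "$current" "$candidate"   # prints "installed"
rm -f "$current"
```

Skipping the move (and the daemon restart) when nothing changed is what keeps repeated provisioning runs from needlessly bouncing the Docker daemon.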
	I0709 10:18:00.274769    6700 machine.go:97] duration metric: took 43.5945976s to provisionDockerMachine
	I0709 10:18:00.274769    6700 client.go:171] duration metric: took 1m52.1621887s to LocalClient.Create
	I0709 10:18:00.274891    6700 start.go:167] duration metric: took 1m52.1623108s to libmachine.API.Create "ha-400600"
	I0709 10:18:00.274891    6700 start.go:293] postStartSetup for "ha-400600" (driver="hyperv")
	I0709 10:18:00.274976    6700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 10:18:00.285971    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 10:18:00.285971    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:02.341060    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:02.341060    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:02.341060    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:04.858604    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:04.858672    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:04.858672    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:18:04.978651    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6926698s)
	I0709 10:18:04.990287    6700 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 10:18:04.993635    6700 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 10:18:04.993635    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 10:18:04.999257    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 10:18:04.999579    6700 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 10:18:04.999579    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 10:18:05.016246    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 10:18:05.035000    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 10:18:05.080190    6700 start.go:296] duration metric: took 4.8052884s for postStartSetup
	I0709 10:18:05.083441    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:07.109669    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:07.109669    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:07.109870    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:09.511461    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:09.511461    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:09.511461    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:18:09.526873    6700 start.go:128] duration metric: took 2m1.4203894s to createHost
	I0709 10:18:09.526873    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:11.544559    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:11.555529    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:11.555529    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:13.940961    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:13.952666    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:13.958216    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:18:13.958216    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:18:13.958818    6700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0709 10:18:14.089176    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720545494.094996238
	
	I0709 10:18:14.089246    6700 fix.go:216] guest clock: 1720545494.094996238
	I0709 10:18:14.089246    6700 fix.go:229] Guest: 2024-07-09 10:18:14.094996238 -0700 PDT Remote: 2024-07-09 10:18:09.5268731 -0700 PDT m=+126.869214101 (delta=4.568123138s)
	I0709 10:18:14.089374    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:16.125196    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:16.125196    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:16.135644    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:18.543707    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:18.554588    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:18.560685    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:18:18.560902    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:18:18.560902    6700 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720545494
	I0709 10:18:18.699178    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 17:18:14 UTC 2024
	
	I0709 10:18:18.699178    6700 fix.go:236] clock set: Tue Jul  9 17:18:14 UTC 2024
	 (err=<nil>)
	I0709 10:18:18.699178    6700 start.go:83] releasing machines lock for "ha-400600", held for 2m10.5932749s
	I0709 10:18:18.699178    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:20.750622    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:20.762271    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:20.762271    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:23.245945    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:23.245945    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:23.261448    6700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 10:18:23.261599    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:23.270975    6700 ssh_runner.go:195] Run: cat /version.json
	I0709 10:18:23.270975    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:25.494825    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:25.494825    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:25.494825    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:25.494825    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:25.495129    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:25.495285    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:28.071630    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:28.071773    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:28.072033    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:18:28.083910    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:28.083910    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:28.088997    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:18:28.253218    6700 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9916846s)
	I0709 10:18:28.254582    6700 ssh_runner.go:235] Completed: cat /version.json: (4.9835953s)
	I0709 10:18:28.267398    6700 ssh_runner.go:195] Run: systemctl --version
	I0709 10:18:28.287704    6700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0709 10:18:28.296624    6700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 10:18:28.308282    6700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 10:18:28.351681    6700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 10:18:28.351681    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:18:28.351681    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:18:28.401308    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 10:18:28.430815    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 10:18:28.452144    6700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 10:18:28.464240    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 10:18:28.498622    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:18:28.528662    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 10:18:28.563632    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:18:28.592490    6700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 10:18:28.625044    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 10:18:28.655962    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 10:18:28.686604    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
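The chain of sed rewrites above all follow one pattern: capture the leading indentation, then rewrite the key in place. A minimal sketch of the SystemdCgroup rule using Python's re module (the sample input line is hypothetical, not taken from a real config.toml):

```python
import re

# Hypothetical containerd config line; the \1 back-reference keeps the indentation.
line = "    SystemdCgroup = true"
out = re.sub(r"^( *)SystemdCgroup = .*$", r"\1SystemdCgroup = false", line)
print(out)
```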
	I0709 10:18:28.718208    6700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 10:18:28.748482    6700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 10:18:28.783522    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:28.968252    6700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 10:18:28.998812    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:18:29.013344    6700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 10:18:29.051240    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:18:29.082624    6700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 10:18:29.132080    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:18:29.164879    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:18:29.203809    6700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 10:18:29.265871    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:18:29.289323    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:18:29.332078    6700 ssh_runner.go:195] Run: which cri-dockerd
	I0709 10:18:29.351028    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 10:18:29.368258    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 10:18:29.410302    6700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 10:18:29.591968    6700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 10:18:29.769531    6700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 10:18:29.769797    6700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 10:18:29.820076    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:30.007600    6700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 10:18:32.584936    6700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.577331s)
	I0709 10:18:32.595944    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 10:18:32.632112    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:18:32.665353    6700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 10:18:32.862030    6700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 10:18:33.042564    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:33.236992    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 10:18:33.284318    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:18:33.318192    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:33.516188    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 10:18:33.616539    6700 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 10:18:33.628081    6700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 10:18:33.638821    6700 start.go:562] Will wait 60s for crictl version
	I0709 10:18:33.650086    6700 ssh_runner.go:195] Run: which crictl
	I0709 10:18:33.668004    6700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 10:18:33.720319    6700 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 10:18:33.730425    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:18:33.777123    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:18:33.808693    6700 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 10:18:33.808902    6700 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 10:18:33.812731    6700 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 10:18:33.812731    6700 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 10:18:33.812731    6700 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 10:18:33.812731    6700 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 10:18:33.815959    6700 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 10:18:33.815959    6700 ip.go:210] interface addr: 172.18.192.1/20
	I0709 10:18:33.822040    6700 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 10:18:33.828624    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
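The /etc/hosts one-liner above is an idempotent upsert: drop any existing line for the name, then append the new mapping. The same logic sketched in Python (the function name is ours, not minikube's):

```python
def upsert_host(hosts_text: str, ip: str, name: str) -> str:
    # Drop any existing mapping for `name`, then append "ip<TAB>name" --
    # the same effect as the `{ grep -v ...; echo ...; } > /tmp/h.$$` pipeline.
    kept = [l for l in hosts_text.splitlines() if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

print(upsert_host("127.0.0.1\tlocalhost\n", "172.18.192.1", "host.minikube.internal"))
```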
	I0709 10:18:33.864539    6700 kubeadm.go:877] updating cluster {Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 10:18:33.864539    6700 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 10:18:33.875617    6700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 10:18:33.897225    6700 docker.go:685] Got preloaded images: 
	I0709 10:18:33.897225    6700 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 10:18:33.909232    6700 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 10:18:33.940216    6700 ssh_runner.go:195] Run: which lz4
	I0709 10:18:33.946436    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0709 10:18:33.957436    6700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0709 10:18:33.965688    6700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 10:18:33.965907    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0709 10:18:36.339183    6700 docker.go:649] duration metric: took 2.3922949s to copy over tarball
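For scale, the effective transfer rate implied by the tarball size from the scp line (359632088 bytes) and the logged 2.3922949s duration:

```python
size_bytes = 359_632_088   # preloaded-images tarball size, from the scp log line
seconds = 2.3922949        # duration metric from docker.go:649
mib_per_s = size_bytes / seconds / (1024 * 1024)
print(f"{mib_per_s:.1f} MiB/s")
```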
	I0709 10:18:36.350129    6700 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 10:18:44.724750    6700 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3746026s)
	I0709 10:18:44.724863    6700 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0709 10:18:44.804514    6700 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 10:18:44.827709    6700 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 10:18:44.869022    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:45.085069    6700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 10:18:48.726946    6700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6418689s)
	I0709 10:18:48.737009    6700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 10:18:48.768415    6700 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 10:18:48.768495    6700 cache_images.go:84] Images are preloaded, skipping loading
	I0709 10:18:48.768590    6700 kubeadm.go:928] updating node { 172.18.204.161 8443 v1.30.2 docker true true} ...
	I0709 10:18:48.768859    6700 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-400600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.204.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 10:18:48.778722    6700 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 10:18:48.810409    6700 cni.go:84] Creating CNI manager for ""
	I0709 10:18:48.810503    6700 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 10:18:48.810544    6700 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 10:18:48.810595    6700 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.204.161 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-400600 NodeName:ha-400600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.204.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.204.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 10:18:48.810943    6700 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.204.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-400600"
	  kubeletExtraArgs:
	    node-ip: 172.18.204.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.204.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0709 10:18:48.811017    6700 kube-vip.go:115] generating kube-vip config ...
	I0709 10:18:48.823921    6700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0709 10:18:48.847812    6700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0709 10:18:48.848004    6700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
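One property worth confirming in the generated manifest: the kube-vip address 172.18.207.254 sits inside the Hyper-V "Default Switch" subnet (172.18.192.1/20) found by ip.go earlier, as does the node IP. A stdlib check:

```python
import ipaddress

host_net = ipaddress.ip_interface("172.18.192.1/20").network  # Default Switch subnet, per ip.go
vip = ipaddress.ip_address("172.18.207.254")                  # APIServerHAVIP / kube-vip address
node = ipaddress.ip_address("172.18.204.161")                 # control-plane node IP
print(vip in host_net, node in host_net)
```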
	I0709 10:18:48.862161    6700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 10:18:48.879558    6700 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 10:18:48.891472    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0709 10:18:48.910116    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0709 10:18:48.940154    6700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 10:18:48.968982    6700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0709 10:18:48.998305    6700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0709 10:18:49.037911    6700 ssh_runner.go:195] Run: grep 172.18.207.254	control-plane.minikube.internal$ /etc/hosts
	I0709 10:18:49.046289    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 10:18:49.081649    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:49.265964    6700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:18:49.298848    6700 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600 for IP: 172.18.204.161
	I0709 10:18:49.298848    6700 certs.go:194] generating shared ca certs ...
	I0709 10:18:49.298967    6700 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.299532    6700 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 10:18:49.300389    6700 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 10:18:49.300576    6700 certs.go:256] generating profile certs ...
	I0709 10:18:49.301344    6700 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.key
	I0709 10:18:49.301525    6700 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.crt with IP's: []
	I0709 10:18:49.441961    6700 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.crt ...
	I0709 10:18:49.441961    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.crt: {Name:mka7233808da0cc81632207b9cdb68c316f32895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.448722    6700 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.key ...
	I0709 10:18:49.448722    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.key: {Name:mk92fb6d80beea0dec3e1f38459a29efbebff793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.450331    6700 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.e33266d4
	I0709 10:18:49.450331    6700 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.e33266d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.204.161 172.18.207.254]
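The SAN list above includes 10.96.0.1, which is simply the first usable address of the configured service CIDR 10.96.0.0/12 (the in-cluster `kubernetes` service IP). A sketch of that derivation:

```python
import ipaddress

svc = ipaddress.ip_network("10.96.0.0/12")  # ServiceCIDR from the cluster config
first = svc.network_address + 1             # first usable service IP
print(first)
```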
	I0709 10:18:49.588257    6700 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.e33266d4 ...
	I0709 10:18:49.588257    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.e33266d4: {Name:mkdeea7a9e8afe19683dfc98b89e22e9ca2d0712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.593630    6700 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.e33266d4 ...
	I0709 10:18:49.593630    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.e33266d4: {Name:mk7249980be063f719f37f8a47747048fcd9bda7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.594856    6700 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.e33266d4 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt
	I0709 10:18:49.608854    6700 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.e33266d4 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key
	I0709 10:18:49.610432    6700 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key
	I0709 10:18:49.610584    6700 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt with IP's: []
	I0709 10:18:49.837821    6700 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt ...
	I0709 10:18:49.837821    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt: {Name:mke8951db5c0b1a6a0535481591e54fe9476f99c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.838328    6700 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key ...
	I0709 10:18:49.838328    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key: {Name:mkdf05987e1446dc8d4c051f44a8aded138f8ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.839847    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 10:18:49.840869    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 10:18:49.841052    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 10:18:49.841263    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 10:18:49.841462    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 10:18:49.841644    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 10:18:49.841817    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 10:18:49.852866    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 10:18:49.853154    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 10:18:49.854671    6700 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 10:18:49.854671    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 10:18:49.854920    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 10:18:49.855467    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 10:18:49.855751    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 10:18:49.856146    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 10:18:49.856146    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 10:18:49.856909    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 10:18:49.856909    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:18:49.857562    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 10:18:49.904005    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 10:18:49.948445    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 10:18:49.996678    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 10:18:50.042295    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0709 10:18:50.084529    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 10:18:50.146108    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 10:18:50.196397    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 10:18:50.231703    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 10:18:50.280271    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 10:18:50.325896    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 10:18:50.368265    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 10:18:50.412784    6700 ssh_runner.go:195] Run: openssl version
	I0709 10:18:50.433529    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 10:18:50.466312    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:18:50.473337    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:18:50.486361    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:18:50.504540    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 10:18:50.537329    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 10:18:50.570308    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 10:18:50.573304    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 10:18:50.579113    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 10:18:50.607993    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 10:18:50.640829    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 10:18:50.672344    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 10:18:50.675802    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 10:18:50.690315    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 10:18:50.712157    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
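The sequence above installs each CA certificate into `/etc/ssl/certs` twice: once under its own name (guarded by `test -s`, so empty files are skipped) and once under its OpenSSL subject-hash name such as `b5213941.0` (guarded by `test -L`, so an existing link is left alone). A minimal sketch of that idiom follows; the directories and the dummy certificate are stand-ins, and the hash name is copied from the log rather than computed (in practice it comes from `openssl x509 -hash -noout -in <cert>`).

```shell
#!/bin/sh
# Sketch of the guarded symlink idiom from the log, using temp dirs as
# stand-ins for /usr/share/ca-certificates and /etc/ssl/certs.
set -eu

certs_dir=$(mktemp -d)   # stand-in for /usr/share/ca-certificates
ssl_dir=$(mktemp -d)     # stand-in for /etc/ssl/certs

printf 'dummy cert\n' > "$certs_dir/minikubeCA.pem"

# "test -s file && ln -fs ..." -- link only when the cert is non-empty.
test -s "$certs_dir/minikubeCA.pem" && \
  ln -fs "$certs_dir/minikubeCA.pem" "$ssl_dir/minikubeCA.pem"

# "test -L link || ln -fs ..." -- create the hash-named alias only once.
test -L "$ssl_dir/b5213941.0" || \
  ln -fs "$ssl_dir/minikubeCA.pem" "$ssl_dir/b5213941.0"

ls -l "$ssl_dir"
```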
	I0709 10:18:50.743720    6700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 10:18:50.751665    6700 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 10:18:50.752123    6700 kubeadm.go:391] StartCluster: {Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 10:18:50.760948    6700 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 10:18:50.798428    6700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 10:18:50.829005    6700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 10:18:50.860662    6700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 10:18:50.874648    6700 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 10:18:50.874648    6700 kubeadm.go:156] found existing configuration files:
	
	I0709 10:18:50.892883    6700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 10:18:50.905920    6700 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 10:18:50.918204    6700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 10:18:50.945478    6700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 10:18:50.959142    6700 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 10:18:50.970041    6700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 10:18:50.997544    6700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 10:18:51.013191    6700 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 10:18:51.030253    6700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 10:18:51.060115    6700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 10:18:51.078035    6700 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 10:18:51.090874    6700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
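The four grep/rm pairs above implement the stale-config cleanup: each kubeconfig under `/etc/kubernetes` is kept only if it already references `https://control-plane.minikube.internal:8443`, and is otherwise removed so that `kubeadm init` regenerates it. A sketch of that loop, assuming a temp directory in place of `/etc/kubernetes` and hypothetical file contents:

```shell
#!/bin/sh
# Sketch of the stale-kubeconfig cleanup from the log: drop any config
# that does not point at the expected control-plane endpoint.
set -u

endpoint='https://control-plane.minikube.internal:8443'
kube_dir=$(mktemp -d)   # stand-in for /etc/kubernetes

# One stale file and one that already matches (both hypothetical).
printf 'server: https://old-host:8443\n' > "$kube_dir/admin.conf"
printf 'server: %s\n' "$endpoint"        > "$kube_dir/kubelet.conf"

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! grep -q "$endpoint" "$kube_dir/$f" 2>/dev/null; then
    rm -f "$kube_dir/$f"   # mirrors the `sudo rm -f` calls in the log
  fi
done

ls "$kube_dir"
```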
	I0709 10:18:51.107417    6700 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 10:18:51.501898    6700 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 10:19:05.600789    6700 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 10:19:05.600789    6700 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 10:19:05.600789    6700 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 10:19:05.601338    6700 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 10:19:05.601664    6700 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 10:19:05.601895    6700 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 10:19:05.608187    6700 out.go:204]   - Generating certificates and keys ...
	I0709 10:19:05.608187    6700 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 10:19:05.608187    6700 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 10:19:05.608911    6700 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 10:19:05.608911    6700 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 10:19:05.608911    6700 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 10:19:05.608911    6700 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 10:19:05.609443    6700 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 10:19:05.609623    6700 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-400600 localhost] and IPs [172.18.204.161 127.0.0.1 ::1]
	I0709 10:19:05.609623    6700 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 10:19:05.610191    6700 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-400600 localhost] and IPs [172.18.204.161 127.0.0.1 ::1]
	I0709 10:19:05.610357    6700 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 10:19:05.610415    6700 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 10:19:05.610415    6700 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 10:19:05.611745    6700 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 10:19:05.611842    6700 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 10:19:05.615239    6700 out.go:204]   - Booting up control plane ...
	I0709 10:19:05.615377    6700 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 10:19:05.615377    6700 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 10:19:05.615377    6700 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 10:19:05.616215    6700 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 10:19:05.616409    6700 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 10:19:05.616409    6700 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 10:19:05.616409    6700 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 10:19:05.616409    6700 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 10:19:05.616409    6700 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.0021856s
	I0709 10:19:05.616409    6700 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 10:19:05.616409    6700 kubeadm.go:309] [api-check] The API server is healthy after 7.502453028s
	I0709 10:19:05.616409    6700 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 10:19:05.616409    6700 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 10:19:05.616409    6700 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 10:19:05.616409    6700 kubeadm.go:309] [mark-control-plane] Marking the node ha-400600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 10:19:05.616409    6700 kubeadm.go:309] [bootstrap-token] Using token: zh32lj.urxnr10p0ojd6j1h
	I0709 10:19:05.621477    6700 out.go:204]   - Configuring RBAC rules ...
	I0709 10:19:05.621948    6700 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 10:19:05.621948    6700 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 10:19:05.622575    6700 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 10:19:05.622828    6700 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 10:19:05.622828    6700 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 10:19:05.623497    6700 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 10:19:05.623632    6700 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 10:19:05.623632    6700 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 10:19:05.623632    6700 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 10:19:05.623632    6700 kubeadm.go:309] 
	I0709 10:19:05.623632    6700 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 10:19:05.623632    6700 kubeadm.go:309] 
	I0709 10:19:05.624340    6700 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 10:19:05.624340    6700 kubeadm.go:309] 
	I0709 10:19:05.624432    6700 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 10:19:05.624859    6700 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 10:19:05.625002    6700 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 10:19:05.625002    6700 kubeadm.go:309] 
	I0709 10:19:05.625198    6700 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 10:19:05.625259    6700 kubeadm.go:309] 
	I0709 10:19:05.625259    6700 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 10:19:05.625259    6700 kubeadm.go:309] 
	I0709 10:19:05.625259    6700 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 10:19:05.625259    6700 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 10:19:05.625846    6700 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 10:19:05.625846    6700 kubeadm.go:309] 
	I0709 10:19:05.626072    6700 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 10:19:05.626440    6700 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 10:19:05.626440    6700 kubeadm.go:309] 
	I0709 10:19:05.626778    6700 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zh32lj.urxnr10p0ojd6j1h \
	I0709 10:19:05.630822    6700 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 10:19:05.630822    6700 kubeadm.go:309] 	--control-plane 
	I0709 10:19:05.630822    6700 kubeadm.go:309] 
	I0709 10:19:05.630822    6700 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 10:19:05.630822    6700 kubeadm.go:309] 
	I0709 10:19:05.631527    6700 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zh32lj.urxnr10p0ojd6j1h \
	I0709 10:19:05.631739    6700 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 10:19:05.631872    6700 cni.go:84] Creating CNI manager for ""
	I0709 10:19:05.631872    6700 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 10:19:05.634892    6700 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0709 10:19:05.648230    6700 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0709 10:19:05.659012    6700 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0709 10:19:05.659067    6700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0709 10:19:05.707277    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0709 10:19:06.378276    6700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 10:19:06.392530    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-400600 minikube.k8s.io/updated_at=2024_07_09T10_19_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=ha-400600 minikube.k8s.io/primary=true
	I0709 10:19:06.392530    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:06.410536    6700 ops.go:34] apiserver oom_adj: -16
	I0709 10:19:06.571931    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:07.073723    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:07.582109    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:08.083450    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:08.588699    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:09.085619    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:09.574693    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:10.076173    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:10.580430    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:11.084027    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:11.589064    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:12.075414    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:12.583098    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:13.094890    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:13.584700    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:14.077349    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:14.583923    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:15.083743    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:15.582350    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:16.080505    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:16.583413    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:17.075784    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:17.577060    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:18.081648    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:18.583111    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:19.082266    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:19.211669    6700 kubeadm.go:1107] duration metric: took 12.8331464s to wait for elevateKubeSystemPrivileges
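The run of identical `kubectl get sa default` lines above is a poll loop: the same command is retried roughly every 500ms until the default service account exists, which is how `elevateKubeSystemPrivileges` knows the cluster is ready (12.8s here). A generic sketch of that wait pattern; `check_cmd` is a hypothetical stand-in for the real kubectl call, simulated with a marker file so the sketch is self-contained:

```shell
#!/bin/sh
# Sketch of a poll-until-success loop like the one in the log.
set -u

marker=$(mktemp -u)
check_cmd() { test -e "$marker"; }   # stand-in for `kubectl get sa default`

( sleep 1; : > "$marker" ) &         # simulate the SA appearing after ~1s

deadline=$(( $(date +%s) + 30 ))
until check_cmd; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "timed out waiting for default service account" >&2
    exit 1
  fi
  sleep 1                            # the log polls at a ~500ms cadence
done
echo "default service account is ready"
```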
	W0709 10:19:19.211767    6700 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 10:19:19.211836    6700 kubeadm.go:393] duration metric: took 28.4595816s to StartCluster
	I0709 10:19:19.211836    6700 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:19:19.212088    6700 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:19:19.214037    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:19:19.215560    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 10:19:19.215623    6700 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:19:19.215623    6700 start.go:240] waiting for startup goroutines ...
	I0709 10:19:19.215623    6700 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0709 10:19:19.215623    6700 addons.go:69] Setting storage-provisioner=true in profile "ha-400600"
	I0709 10:19:19.215623    6700 addons.go:69] Setting default-storageclass=true in profile "ha-400600"
	I0709 10:19:19.215623    6700 addons.go:234] Setting addon storage-provisioner=true in "ha-400600"
	I0709 10:19:19.215623    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:19:19.216231    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:19:19.215623    6700 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-400600"
	I0709 10:19:19.217293    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:19:19.217897    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:19:19.389113    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 10:19:19.803341    6700 start.go:946] {"host.minikube.internal": 172.18.192.1} host record injected into CoreDNS's ConfigMap
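The long pipeline two lines up is how the `host.minikube.internal` record lands in CoreDNS: the ConfigMap's Corefile is streamed through `sed`, which inserts a `hosts` block before the `forward . /etc/resolv.conf` line and a `log` directive before `errors`, then the result is `kubectl replace`d. A sketch of just the sed transformation, applied to a trimmed sample Corefile (the `\n` escapes in the inserted text rely on GNU sed, as on the minikube guest):

```shell
#!/bin/sh
# Sketch of the CoreDNS Corefile rewrite from the log, applied to a
# hypothetical minimal Corefile instead of the live ConfigMap.
set -eu

corefile=$(mktemp)
cat > "$corefile" <<'EOF'
.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
}
EOF

sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' "$corefile"
```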
	I0709 10:19:21.526158    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:19:21.526158    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:21.526158    6700 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:19:21.529180    6700 kapi.go:59] client config for ha-400600: &rest.Config{Host:"https://172.18.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 10:19:21.531019    6700 cert_rotation.go:137] Starting client certificate rotation controller
	I0709 10:19:21.531395    6700 addons.go:234] Setting addon default-storageclass=true in "ha-400600"
	I0709 10:19:21.531510    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:19:21.532672    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:19:21.539846    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:19:21.539909    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:21.543245    6700 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 10:19:21.546095    6700 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 10:19:21.546095    6700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 10:19:21.546095    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:19:23.834220    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:19:23.834220    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:23.834220    6700 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 10:19:23.834342    6700 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 10:19:23.834413    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:19:23.836716    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:19:23.836794    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:23.836869    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:19:26.073006    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:19:26.088033    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:26.088033    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:19:26.544598    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:19:26.544598    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:26.544598    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:19:26.682943    6700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 10:19:28.672371    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:19:28.678957    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:28.678957    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:19:28.810737    6700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 10:19:29.045903    6700 round_trippers.go:463] GET https://172.18.207.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0709 10:19:29.045971    6700 round_trippers.go:469] Request Headers:
	I0709 10:19:29.045971    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:19:29.046027    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:19:29.072173    6700 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0709 10:19:29.073170    6700 round_trippers.go:463] PUT https://172.18.207.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0709 10:19:29.073170    6700 round_trippers.go:469] Request Headers:
	I0709 10:19:29.073170    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:19:29.073170    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:19:29.073170    6700 round_trippers.go:473]     Content-Type: application/json
	I0709 10:19:29.073765    6700 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:19:29.081778    6700 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0709 10:19:29.087882    6700 addons.go:510] duration metric: took 9.8722371s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0709 10:19:29.087882    6700 start.go:245] waiting for cluster config update ...
	I0709 10:19:29.087882    6700 start.go:254] writing updated cluster config ...
	I0709 10:19:29.093827    6700 out.go:177] 
	I0709 10:19:29.103899    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:19:29.103899    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:19:29.110903    6700 out.go:177] * Starting "ha-400600-m02" control-plane node in "ha-400600" cluster
	I0709 10:19:29.117086    6700 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 10:19:29.117086    6700 cache.go:56] Caching tarball of preloaded images
	I0709 10:19:29.117641    6700 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 10:19:29.117887    6700 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 10:19:29.118190    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:19:29.119510    6700 start.go:360] acquireMachinesLock for ha-400600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 10:19:29.121096    6700 start.go:364] duration metric: took 1.5865ms to acquireMachinesLock for "ha-400600-m02"
	I0709 10:19:29.121277    6700 start.go:93] Provisioning new machine with config: &{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:19:29.121277    6700 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0709 10:19:29.123470    6700 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 10:19:29.128869    6700 start.go:159] libmachine.API.Create for "ha-400600" (driver="hyperv")
	I0709 10:19:29.128869    6700 client.go:168] LocalClient.Create starting
	I0709 10:19:29.129128    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 10:19:29.129825    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:19:29.129825    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:19:29.130039    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 10:19:29.130258    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:19:29.130258    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:19:29.130490    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 10:19:31.048681    6700 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 10:19:31.048681    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:31.048681    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 10:19:32.828860    6700 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 10:19:32.828860    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:32.829203    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:19:34.333254    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:19:34.333398    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:34.333398    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:19:37.989649    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:19:37.989917    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:37.994114    6700 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 10:19:38.517380    6700 main.go:141] libmachine: Creating SSH key...
	I0709 10:19:38.712054    6700 main.go:141] libmachine: Creating VM...
	I0709 10:19:38.712054    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:19:41.608194    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:19:41.609105    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:41.609216    6700 main.go:141] libmachine: Using switch "Default Switch"
	I0709 10:19:41.609216    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:19:43.374004    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:19:43.374004    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:43.374004    6700 main.go:141] libmachine: Creating VHD
	I0709 10:19:43.374219    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 10:19:47.211864    6700 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4ECC2E3E-23F6-44BF-8AA9-605DE177D552
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 10:19:47.211864    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:47.211864    6700 main.go:141] libmachine: Writing magic tar header
	I0709 10:19:47.211990    6700 main.go:141] libmachine: Writing SSH key tar header
	I0709 10:19:47.222156    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 10:19:50.456678    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:19:50.456678    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:50.456678    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\disk.vhd' -SizeBytes 20000MB
	I0709 10:19:52.979059    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:19:52.979059    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:52.979940    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-400600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 10:19:56.661078    6700 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-400600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 10:19:56.661078    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:56.661870    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-400600-m02 -DynamicMemoryEnabled $false
	I0709 10:19:58.913011    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:19:58.913011    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:58.913011    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-400600-m02 -Count 2
	I0709 10:20:01.123871    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:01.123871    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:01.123871    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-400600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\boot2docker.iso'
	I0709 10:20:03.737775    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:03.737775    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:03.738634    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-400600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\disk.vhd'
	I0709 10:20:06.453581    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:06.453976    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:06.453976    6700 main.go:141] libmachine: Starting VM...
	I0709 10:20:06.453976    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-400600-m02
	I0709 10:20:09.542044    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:09.542044    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:09.542610    6700 main.go:141] libmachine: Waiting for host to start...
	I0709 10:20:09.542610    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:11.856562    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:11.857260    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:11.857260    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:14.430002    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:14.431142    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:15.432860    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:17.684806    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:17.684806    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:17.684806    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:20.300020    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:20.300020    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:21.304371    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:23.609730    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:23.609730    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:23.609730    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:26.183209    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:26.183270    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:27.183572    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:29.418240    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:29.418240    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:29.418240    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:31.966591    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:31.966591    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:32.971572    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:35.224209    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:35.224209    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:35.224680    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:37.890347    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:20:37.891029    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:37.891123    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:40.062862    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:40.063532    6700 main.go:141] libmachine: [stderr =====>] : 
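The "Waiting for host to start..." sequence above is a simple poll loop: libmachine repeatedly queries the VM state and the first adapter's IP via PowerShell, sleeping about a second between rounds, until Hyper-V reports an address (it returns an empty result until the guest completes DHCP). A generic Python sketch of that pattern, with the two queries abstracted as callables (both names are assumptions for illustration):

```python
import time

def wait_for_ip(get_state, get_ip, timeout=120.0, delay=1.0):
    """Poll VM state and adapter IP until an address appears, as libmachine
    does in the log ("Waiting for host to start...")."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "Running":
            ip = get_ip()
            if ip:  # empty until the guest's DHCP lease is assigned
                return ip
        time.sleep(delay)
    raise TimeoutError("host never reported an IP address")
```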
	I0709 10:20:40.063605    6700 machine.go:94] provisionDockerMachine start ...
	I0709 10:20:40.063605    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:42.232343    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:42.232343    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:42.232436    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:44.786215    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:20:44.786267    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:44.791325    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:20:44.803070    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:20:44.803070    6700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 10:20:44.931955    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 10:20:44.932066    6700 buildroot.go:166] provisioning hostname "ha-400600-m02"
	I0709 10:20:44.932066    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:47.129679    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:47.130081    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:47.130081    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:49.749823    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:20:49.750347    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:49.755721    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:20:49.757091    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:20:49.757091    6700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-400600-m02 && echo "ha-400600-m02" | sudo tee /etc/hostname
	I0709 10:20:49.912990    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-400600-m02
	
	I0709 10:20:49.912990    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:52.136717    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:52.136717    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:52.136885    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:54.710906    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:20:54.710906    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:54.717068    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:20:54.717648    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:20:54.717743    6700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-400600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-400600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-400600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 10:20:54.862361    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
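The shell snippet executed just above ensures `/etc/hosts` maps `127.0.1.1` to the node's hostname: if no line already ends with the hostname, it either rewrites an existing `127.0.1.1` entry or appends a new one. The same logic as a small Python sketch (the helper name is hypothetical):

```python
import re

def ensure_loopback_entry(hosts_text: str, hostname: str) -> str:
    """Replicate the /etc/hosts shell snippet from the log: guarantee a
    127.0.1.1 entry for the node's hostname (hypothetical helper)."""
    # Already present on some line (whole-line match ending with the hostname)?
    if re.search(rf"^.*\s{re.escape(hostname)}$", hosts_text, re.M):
        return hosts_text
    # Rewrite an existing 127.0.1.1 line, like the sed branch does.
    if re.search(r"^127\.0\.1\.1\s.*$", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {hostname}",
                      hosts_text, count=1, flags=re.M)
    # Otherwise append, like the tee -a branch.
    return hosts_text.rstrip("\n") + f"\n127.0.1.1 {hostname}\n"
```

Like the shell version, the function is idempotent: running it a second time leaves the file unchanged.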
	I0709 10:20:54.862361    6700 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 10:20:54.862361    6700 buildroot.go:174] setting up certificates
	I0709 10:20:54.862361    6700 provision.go:84] configureAuth start
	I0709 10:20:54.862361    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:57.004888    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:57.004888    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:57.004888    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:59.574239    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:20:59.574239    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:59.575124    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:01.706855    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:01.706855    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:01.706963    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:04.264323    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:04.264323    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:04.264323    6700 provision.go:143] copyHostCerts
	I0709 10:21:04.264615    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 10:21:04.264997    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 10:21:04.264997    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 10:21:04.264997    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 10:21:04.266186    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 10:21:04.266186    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 10:21:04.266186    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 10:21:04.266186    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 10:21:04.266186    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 10:21:04.266186    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 10:21:04.266186    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 10:21:04.266186    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 10:21:04.266186    6700 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-400600-m02 san=[127.0.0.1 172.18.194.29 ha-400600-m02 localhost minikube]
	I0709 10:21:04.924276    6700 provision.go:177] copyRemoteCerts
	I0709 10:21:04.937812    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 10:21:04.937812    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:07.111725    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:07.112064    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:07.112064    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:09.707523    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:09.708076    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:09.708157    6700 sshutil.go:53] new ssh client: &{IP:172.18.194.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\id_rsa Username:docker}
	I0709 10:21:09.811548    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.873725s)
	I0709 10:21:09.811548    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 10:21:09.812494    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 10:21:09.859280    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 10:21:09.859280    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0709 10:21:09.907184    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 10:21:09.907405    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0709 10:21:09.953082    6700 provision.go:87] duration metric: took 15.0906879s to configureAuth
	I0709 10:21:09.953082    6700 buildroot.go:189] setting minikube options for container-runtime
	I0709 10:21:09.953690    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:21:09.954274    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:12.117069    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:12.117069    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:12.117069    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:14.698815    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:14.698815    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:14.706424    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:21:14.706592    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:21:14.706592    6700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 10:21:14.829911    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 10:21:14.829911    6700 buildroot.go:70] root file system type: tmpfs
	I0709 10:21:14.829911    6700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 10:21:14.829911    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:16.981791    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:16.982243    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:16.982243    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:19.580394    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:19.581453    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:19.587354    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:21:19.587567    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:21:19.587567    6700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.204.161"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 10:21:19.738959    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.204.161
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 10:21:19.738959    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:21.889937    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:21.889937    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:21.890512    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:24.487444    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:24.488476    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:24.494624    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:21:24.495266    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:21:24.495266    6700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 10:21:26.694360    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 10:21:26.694360    6700 machine.go:97] duration metric: took 46.6306509s to provisionDockerMachine
	I0709 10:21:26.694360    6700 client.go:171] duration metric: took 1m57.5652308s to LocalClient.Create
	I0709 10:21:26.694360    6700 start.go:167] duration metric: took 1m57.5652308s to libmachine.API.Create "ha-400600"
	I0709 10:21:26.694360    6700 start.go:293] postStartSetup for "ha-400600-m02" (driver="hyperv")
	I0709 10:21:26.694360    6700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 10:21:26.706639    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 10:21:26.706639    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:28.871632    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:28.871632    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:28.872484    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:31.433006    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:31.433006    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:31.433891    6700 sshutil.go:53] new ssh client: &{IP:172.18.194.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\id_rsa Username:docker}
	I0709 10:21:31.550775    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8440932s)
	I0709 10:21:31.563157    6700 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 10:21:31.570427    6700 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 10:21:31.570427    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 10:21:31.570962    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 10:21:31.572099    6700 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 10:21:31.572099    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 10:21:31.585416    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 10:21:31.603788    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 10:21:31.649410    6700 start.go:296] duration metric: took 4.9550386s for postStartSetup
	I0709 10:21:31.652291    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:33.893843    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:33.893843    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:33.893843    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:36.488354    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:36.488354    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:36.488916    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:21:36.491413    6700 start.go:128] duration metric: took 2m7.3698531s to createHost
	I0709 10:21:36.491413    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:38.712894    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:38.712988    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:38.713072    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:41.277455    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:41.277455    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:41.284279    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:21:41.284848    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:21:41.284848    6700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 10:21:41.406238    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720545701.413402406
	
	I0709 10:21:41.406238    6700 fix.go:216] guest clock: 1720545701.413402406
	I0709 10:21:41.406300    6700 fix.go:229] Guest: 2024-07-09 10:21:41.413402406 -0700 PDT Remote: 2024-07-09 10:21:36.4914138 -0700 PDT m=+333.833296901 (delta=4.921988606s)
	I0709 10:21:41.406379    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:43.597896    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:43.597896    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:43.597896    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:46.216390    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:46.216390    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:46.223023    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:21:46.223436    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:21:46.223436    6700 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720545701
	I0709 10:21:46.367188    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 17:21:41 UTC 2024
	
	I0709 10:21:46.367188    6700 fix.go:236] clock set: Tue Jul  9 17:21:41 UTC 2024
	 (err=<nil>)
	I0709 10:21:46.367188    6700 start.go:83] releasing machines lock for "ha-400600-m02", held for 2m17.2457859s
	I0709 10:21:46.367188    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:48.570250    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:48.570969    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:48.570969    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:51.205915    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:51.205915    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:51.210294    6700 out.go:177] * Found network options:
	I0709 10:21:51.213256    6700 out.go:177]   - NO_PROXY=172.18.204.161
	W0709 10:21:51.216704    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 10:21:51.219291    6700 out.go:177]   - NO_PROXY=172.18.204.161
	W0709 10:21:51.221600    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 10:21:51.221967    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 10:21:51.224998    6700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 10:21:51.224998    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:51.235003    6700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 10:21:51.235003    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:53.529325    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:53.529325    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:53.529325    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:53.529325    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:53.530257    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:53.530257    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:56.324171    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:56.324171    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:56.324171    6700 sshutil.go:53] new ssh client: &{IP:172.18.194.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\id_rsa Username:docker}
	I0709 10:21:56.348330    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:56.348330    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:56.348330    6700 sshutil.go:53] new ssh client: &{IP:172.18.194.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\id_rsa Username:docker}
	I0709 10:21:56.414093    6700 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1790781s)
	W0709 10:21:56.414211    6700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 10:21:56.426950    6700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 10:21:56.506247    6700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 10:21:56.506247    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:21:56.506247    6700 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2812367s)
	I0709 10:21:56.506417    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:21:56.553989    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 10:21:56.584947    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 10:21:56.605999    6700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 10:21:56.617667    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 10:21:56.647545    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:21:56.677537    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 10:21:56.708864    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:21:56.740788    6700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 10:21:56.773035    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 10:21:56.801875    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 10:21:56.831894    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 10:21:56.863828    6700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 10:21:56.893732    6700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 10:21:56.923280    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:21:57.122823    6700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 10:21:57.168186    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:21:57.180708    6700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 10:21:57.224807    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:21:57.261180    6700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 10:21:57.308495    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:21:57.344493    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:21:57.383282    6700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 10:21:57.447602    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:21:57.472052    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:21:57.516564    6700 ssh_runner.go:195] Run: which cri-dockerd
	I0709 10:21:57.534258    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 10:21:57.551834    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 10:21:57.599321    6700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 10:21:57.813433    6700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 10:21:58.006045    6700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 10:21:58.006045    6700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 10:21:58.066905    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:21:58.258571    6700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 10:22:00.839941    6700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5813642s)
	I0709 10:22:00.852627    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 10:22:00.893278    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:22:00.927214    6700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 10:22:01.128640    6700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 10:22:01.322499    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:22:01.520908    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 10:22:01.565905    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:22:01.602197    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:22:01.803855    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 10:22:01.913637    6700 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 10:22:01.925742    6700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 10:22:01.936387    6700 start.go:562] Will wait 60s for crictl version
	I0709 10:22:01.948665    6700 ssh_runner.go:195] Run: which crictl
	I0709 10:22:01.966862    6700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 10:22:02.028478    6700 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 10:22:02.038922    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:22:02.087983    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:22:02.128717    6700 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 10:22:02.132672    6700 out.go:177]   - env NO_PROXY=172.18.204.161
	I0709 10:22:02.134736    6700 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 10:22:02.138702    6700 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 10:22:02.138702    6700 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 10:22:02.138702    6700 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 10:22:02.138702    6700 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 10:22:02.141674    6700 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 10:22:02.142722    6700 ip.go:210] interface addr: 172.18.192.1/20
	I0709 10:22:02.152660    6700 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 10:22:02.158612    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 10:22:02.180266    6700 mustload.go:65] Loading cluster: ha-400600
	I0709 10:22:02.181018    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:22:02.182015    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:22:04.362014    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:22:04.362014    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:22:04.362014    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:22:04.363662    6700 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600 for IP: 172.18.194.29
	I0709 10:22:04.363662    6700 certs.go:194] generating shared ca certs ...
	I0709 10:22:04.363935    6700 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:22:04.364556    6700 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 10:22:04.365161    6700 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 10:22:04.365430    6700 certs.go:256] generating profile certs ...
	I0709 10:22:04.365790    6700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.key
	I0709 10:22:04.365790    6700 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.73210077
	I0709 10:22:04.366425    6700 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.73210077 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.204.161 172.18.194.29 172.18.207.254]
	I0709 10:22:04.551536    6700 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.73210077 ...
	I0709 10:22:04.551536    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.73210077: {Name:mk4a51d16faaa4f23e66052e6592db0df7d43bee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:22:04.552956    6700 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.73210077 ...
	I0709 10:22:04.552956    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.73210077: {Name:mkb98654f3a8d12070f23724cefc35befb1c4352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:22:04.554457    6700 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.73210077 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt
	I0709 10:22:04.566153    6700 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.73210077 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key
	I0709 10:22:04.567898    6700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 10:22:04.569195    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 10:22:04.569576    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 10:22:04.569886    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 10:22:04.569886    6700 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 10:22:04.569886    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 10:22:04.570893    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 10:22:04.571159    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 10:22:04.571159    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 10:22:04.572167    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 10:22:04.572278    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 10:22:04.572278    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 10:22:04.572278    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:22:04.573020    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:22:06.790104    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:22:06.790104    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:22:06.790557    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:22:09.444085    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:22:09.444272    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:22:09.444333    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:22:09.553785    6700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0709 10:22:09.562857    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0709 10:22:09.599367    6700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0709 10:22:09.606882    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0709 10:22:09.640199    6700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0709 10:22:09.646942    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0709 10:22:09.680015    6700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0709 10:22:09.688196    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0709 10:22:09.720820    6700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0709 10:22:09.727704    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0709 10:22:09.762645    6700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0709 10:22:09.768667    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0709 10:22:09.789341    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 10:22:09.837953    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 10:22:09.887405    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 10:22:09.934516    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 10:22:09.985492    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0709 10:22:10.033068    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 10:22:10.086319    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 10:22:10.136019    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 10:22:10.182147    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 10:22:10.228981    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 10:22:10.276301    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 10:22:10.323257    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0709 10:22:10.357444    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0709 10:22:10.391158    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0709 10:22:10.424803    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0709 10:22:10.457769    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0709 10:22:10.488572    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0709 10:22:10.520226    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0709 10:22:10.566902    6700 ssh_runner.go:195] Run: openssl version
	I0709 10:22:10.590073    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 10:22:10.623913    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 10:22:10.631657    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 10:22:10.645423    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 10:22:10.669538    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 10:22:10.703208    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 10:22:10.734343    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 10:22:10.741731    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 10:22:10.753732    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 10:22:10.776554    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 10:22:10.808266    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 10:22:10.841624    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:22:10.848450    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:22:10.861449    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:22:10.882644    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
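	The `openssl x509 -hash` / `ln -fs` pairs above install each CA under its subject-hash name (e.g. `b5213941.0`), which is how OpenSSL locates trusted certs in a directory. A standalone sketch of that step, using a throwaway self-signed cert and a temp directory in place of `/etc/ssl/certs` (all paths hypothetical):

```shell
# Link a CA cert under its OpenSSL subject-hash name, as the log does.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" -days 1 2>/dev/null
# OpenSSL resolves trust by looking up <subject-hash>.0 in the cert dir.
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
ls -l "$DIR"
```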
	I0709 10:22:10.914774    6700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 10:22:10.921346    6700 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 10:22:10.921346    6700 kubeadm.go:928] updating node {m02 172.18.194.29 8443 v1.30.2 docker true true} ...
	I0709 10:22:10.921962    6700 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-400600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.194.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 10:22:10.921962    6700 kube-vip.go:115] generating kube-vip config ...
	I0709 10:22:10.933625    6700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0709 10:22:10.959172    6700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0709 10:22:10.960659    6700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
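	The manifest above pins the control-plane VIP in the `address` env entry. A quick shell check to confirm which VIP a node will advertise, shown here against an inline excerpt of the manifest rather than the real `/etc/kubernetes/manifests/kube-vip.yaml` (path and extraction logic are illustrative):

```shell
# Pull the advertised VIP out of a kube-vip static-pod manifest excerpt.
MANIFEST=$(mktemp)
cat > "$MANIFEST" <<'EOF'
    - name: address
      value: 172.18.207.254
    - name: lb_port
      value: "8443"
EOF
# Print the value on the line following the "address" env entry.
VIP=$(awk '/name: address/{getline; sub(/^[ \t]*value:[ \t]*/,""); print}' "$MANIFEST")
echo "$VIP"
```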
	I0709 10:22:10.971620    6700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 10:22:10.987878    6700 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0709 10:22:10.999677    6700 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0709 10:22:11.022406    6700 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet
	I0709 10:22:11.023027    6700 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl
	I0709 10:22:11.023087    6700 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm
	I0709 10:22:12.077078    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0709 10:22:12.089302    6700 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0709 10:22:12.097821    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0709 10:22:12.098037    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0709 10:22:12.167467    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0709 10:22:12.172966    6700 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0709 10:22:12.191910    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0709 10:22:12.191910    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0709 10:22:12.475387    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 10:22:12.558504    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0709 10:22:12.584056    6700 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0709 10:22:12.601073    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0709 10:22:12.602051    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0709 10:22:13.555164    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0709 10:22:13.575057    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0709 10:22:13.607641    6700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 10:22:13.642222    6700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0709 10:22:13.689225    6700 ssh_runner.go:195] Run: grep 172.18.207.254	control-plane.minikube.internal$ /etc/hosts
	I0709 10:22:13.695919    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 10:22:13.731217    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:22:13.950355    6700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:22:13.982022    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:22:13.983077    6700 start.go:316] joinCluster: &{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.194.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 10:22:13.983326    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0709 10:22:13.983403    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:22:16.195346    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:22:16.195346    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:22:16.195346    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:22:18.827529    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:22:18.827529    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:22:18.828078    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:22:19.037769    6700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0543976s)
	I0709 10:22:19.037882    6700 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.18.194.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:22:19.037882    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hidln1.r2nzqumybz2oot2d --discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-400600-m02 --control-plane --apiserver-advertise-address=172.18.194.29 --apiserver-bind-port=8443"
	I0709 10:23:05.800803    6700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hidln1.r2nzqumybz2oot2d --discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-400600-m02 --control-plane --apiserver-advertise-address=172.18.194.29 --apiserver-bind-port=8443": (46.7627236s)
	I0709 10:23:05.800961    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0709 10:23:06.621841    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-400600-m02 minikube.k8s.io/updated_at=2024_07_09T10_23_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=ha-400600 minikube.k8s.io/primary=false
	I0709 10:23:06.802867    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-400600-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0709 10:23:06.949058    6700 start.go:318] duration metric: took 52.9658596s to joinCluster
	I0709 10:23:06.949215    6700 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.18.194.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:23:06.949773    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:23:06.951938    6700 out.go:177] * Verifying Kubernetes components...
	I0709 10:23:06.967361    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:23:07.305940    6700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:23:07.338767    6700 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:23:07.339647    6700 kapi.go:59] client config for ha-400600: &rest.Config{Host:"https://172.18.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0709 10:23:07.339871    6700 kubeadm.go:477] Overriding stale ClientConfig host https://172.18.207.254:8443 with https://172.18.204.161:8443
	I0709 10:23:07.340984    6700 node_ready.go:35] waiting up to 6m0s for node "ha-400600-m02" to be "Ready" ...
	I0709 10:23:07.341187    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:07.341187    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:07.341187    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:07.341247    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:07.377627    6700 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0709 10:23:07.842079    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:07.842410    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:07.842410    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:07.842410    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:07.889542    6700 round_trippers.go:574] Response Status: 200 OK in 46 milliseconds
	I0709 10:23:08.349595    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:08.349595    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:08.349595    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:08.349595    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:08.356332    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:08.856185    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:08.856185    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:08.856185    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:08.856185    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:08.872490    6700 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 10:23:09.342634    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:09.342634    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:09.342634    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:09.342634    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:09.352365    6700 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0709 10:23:09.354049    6700 node_ready.go:53] node "ha-400600-m02" has status "Ready":"False"
	I0709 10:23:09.849205    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:09.849434    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:09.849434    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:09.849560    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:09.855905    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:10.354979    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:10.355194    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:10.355194    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:10.355194    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:10.362058    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:10.848162    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:10.848162    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:10.848162    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:10.848162    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:10.858733    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:23:11.353740    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:11.353740    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:11.353740    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:11.353740    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:11.357194    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:11.358553    6700 node_ready.go:53] node "ha-400600-m02" has status "Ready":"False"
	I0709 10:23:11.844023    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:11.844314    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:11.844314    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:11.844314    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:11.852650    6700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 10:23:12.355129    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:12.355129    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:12.355192    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:12.355211    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:12.360126    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:12.854064    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:12.854157    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:12.854157    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:12.854157    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:12.860757    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:13.352389    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:13.352389    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:13.352389    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:13.352389    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:13.358956    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:13.359834    6700 node_ready.go:53] node "ha-400600-m02" has status "Ready":"False"
	I0709 10:23:13.847049    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:13.847049    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:13.847049    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:13.847049    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:13.850636    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:14.355976    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:14.355976    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:14.355976    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:14.355976    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:14.361341    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:14.843112    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:14.843112    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:14.843522    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:14.843522    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:14.848926    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:15.346436    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:15.346436    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:15.346436    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:15.346436    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:15.351012    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:15.848171    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:15.848171    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:15.848171    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:15.848171    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:15.853410    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:15.854875    6700 node_ready.go:53] node "ha-400600-m02" has status "Ready":"False"
	I0709 10:23:16.345436    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:16.345436    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:16.345436    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:16.345436    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:16.352975    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:16.846429    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:16.846669    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:16.846669    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:16.846669    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:16.851042    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.351331    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:17.351538    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.351538    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.351538    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.356727    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:17.358134    6700 node_ready.go:49] node "ha-400600-m02" has status "Ready":"True"
	I0709 10:23:17.358134    6700 node_ready.go:38] duration metric: took 10.0170375s for node "ha-400600-m02" to be "Ready" ...
	I0709 10:23:17.358134    6700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:23:17.358134    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:23:17.358134    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.358134    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.358134    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.368322    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:23:17.377659    6700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.377659    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zbxnq
	I0709 10:23:17.377659    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.377659    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.377659    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.381265    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:17.382578    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:17.382578    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.382702    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.382702    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.387438    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.388677    6700 pod_ready.go:92] pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:17.388741    6700 pod_ready.go:81] duration metric: took 11.0815ms for pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.388741    6700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.388805    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zst2x
	I0709 10:23:17.388876    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.388876    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.388876    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.392891    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.393903    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:17.393992    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.393992    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.393992    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.396951    6700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:23:17.398566    6700 pod_ready.go:92] pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:17.398566    6700 pod_ready.go:81] duration metric: took 9.8248ms for pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.398566    6700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.398755    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600
	I0709 10:23:17.398755    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.398755    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.398755    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.402069    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:17.403190    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:17.403190    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.403190    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.403190    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.407867    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.409038    6700 pod_ready.go:92] pod "etcd-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:17.409038    6700 pod_ready.go:81] duration metric: took 10.4724ms for pod "etcd-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.409038    6700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.409184    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:17.409184    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.409184    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.409291    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.413351    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.414206    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:17.414206    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.414206    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.414206    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.419020    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.919304    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:17.919304    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.919304    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.919304    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.925841    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:17.927630    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:17.927630    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.927630    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.927727    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.935434    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:18.423205    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:18.423268    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:18.423268    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:18.423268    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:18.431136    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:18.432688    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:18.432784    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:18.432784    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:18.432784    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:18.436590    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:18.910863    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:18.910962    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:18.910962    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:18.910962    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:18.915099    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:18.916714    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:18.916744    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:18.916939    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:18.916980    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:18.921097    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:19.425016    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:19.425016    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:19.425016    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:19.425016    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:19.430790    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:19.431701    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:19.431701    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:19.431796    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:19.431796    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:19.436835    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:19.437559    6700 pod_ready.go:102] pod "etcd-ha-400600-m02" in "kube-system" namespace has status "Ready":"False"
	I0709 10:23:19.912987    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:19.912987    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:19.912987    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:19.913390    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:19.919716    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:19.920766    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:19.920856    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:19.920856    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:19.920856    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:19.925120    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:20.411212    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:20.411212    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:20.411212    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:20.411212    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:20.414530    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:20.416025    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:20.416025    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:20.416025    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:20.416025    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:20.420684    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:20.914024    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:20.914024    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:20.914024    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:20.914024    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:20.919463    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:20.921045    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:20.921045    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:20.921045    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:20.921045    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:20.944236    6700 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0709 10:23:21.415444    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:21.415534    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.415534    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.415534    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.419901    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:21.421583    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:21.421644    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.421644    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.421644    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.424902    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:21.425894    6700 pod_ready.go:92] pod "etcd-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:21.425894    6700 pod_ready.go:81] duration metric: took 4.0168465s for pod "etcd-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.425894    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.425894    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600
	I0709 10:23:21.425894    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.425894    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.425894    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.430644    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:21.431406    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:21.431406    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.431406    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.431406    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.460867    6700 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0709 10:23:21.461601    6700 pod_ready.go:92] pod "kube-apiserver-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:21.461658    6700 pod_ready.go:81] duration metric: took 35.7068ms for pod "kube-apiserver-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.461658    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.461894    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600-m02
	I0709 10:23:21.461894    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.461894    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.461894    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.470987    6700 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0709 10:23:21.472429    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:21.472429    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.472542    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.472542    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.476994    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:21.477737    6700 pod_ready.go:92] pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:21.477737    6700 pod_ready.go:81] duration metric: took 16.0797ms for pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.477737    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.555755    6700 request.go:629] Waited for 77.7485ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600
	I0709 10:23:21.555860    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600
	I0709 10:23:21.555860    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.555976    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.555976    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.560689    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:21.759551    6700 request.go:629] Waited for 196.9091ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:21.759666    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:21.759666    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.759666    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.759666    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.765123    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:21.766468    6700 pod_ready.go:92] pod "kube-controller-manager-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:21.766533    6700 pod_ready.go:81] duration metric: took 288.7185ms for pod "kube-controller-manager-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.766533    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.962277    6700 request.go:629] Waited for 195.2609ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m02
	I0709 10:23:21.962497    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m02
	I0709 10:23:21.962497    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.962561    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.962561    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.968323    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:22.165651    6700 request.go:629] Waited for 195.7957ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:22.165651    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:22.165651    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:22.165651    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:22.165651    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:22.171320    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:22.171723    6700 pod_ready.go:92] pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:22.172636    6700 pod_ready.go:81] duration metric: took 406.1022ms for pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:22.172705    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7k7w8" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:22.352412    6700 request.go:629] Waited for 179.4472ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7k7w8
	I0709 10:23:22.352787    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7k7w8
	I0709 10:23:22.352787    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:22.352787    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:22.352787    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:22.358622    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:22.555850    6700 request.go:629] Waited for 195.6488ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:22.555850    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:22.555850    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:22.555850    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:22.555850    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:22.561655    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:22.562527    6700 pod_ready.go:92] pod "kube-proxy-7k7w8" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:22.562582    6700 pod_ready.go:81] duration metric: took 389.8759ms for pod "kube-proxy-7k7w8" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:22.562582    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-djlzm" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:22.758231    6700 request.go:629] Waited for 195.4993ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djlzm
	I0709 10:23:22.758472    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djlzm
	I0709 10:23:22.758472    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:22.758548    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:22.758548    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:22.766564    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:22.960713    6700 request.go:629] Waited for 193.6089ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:22.961027    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:22.961027    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:22.961027    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:22.961027    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:22.965557    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:22.967338    6700 pod_ready.go:92] pod "kube-proxy-djlzm" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:22.967338    6700 pod_ready.go:81] duration metric: took 404.7546ms for pod "kube-proxy-djlzm" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:22.967427    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:23.167134    6700 request.go:629] Waited for 199.6376ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600
	I0709 10:23:23.167134    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600
	I0709 10:23:23.167134    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.167134    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.167134    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.171406    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:23.356491    6700 request.go:629] Waited for 183.3604ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:23.356740    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:23.356792    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.356792    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.356792    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.361308    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:23.362948    6700 pod_ready.go:92] pod "kube-scheduler-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:23.363024    6700 pod_ready.go:81] duration metric: took 395.5961ms for pod "kube-scheduler-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:23.363024    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:23.559171    6700 request.go:629] Waited for 195.9164ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m02
	I0709 10:23:23.559364    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m02
	I0709 10:23:23.559466    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.559466    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.559466    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.564861    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:23.765887    6700 request.go:629] Waited for 199.6772ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:23.765887    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:23.765887    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.765887    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.765887    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.771580    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:23.772886    6700 pod_ready.go:92] pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:23.773053    6700 pod_ready.go:81] duration metric: took 410.0278ms for pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:23.773053    6700 pod_ready.go:38] duration metric: took 6.414904s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:23:23.773179    6700 api_server.go:52] waiting for apiserver process to appear ...
	I0709 10:23:23.785039    6700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 10:23:23.814980    6700 api_server.go:72] duration metric: took 16.8655931s to wait for apiserver process to appear ...
	I0709 10:23:23.815059    6700 api_server.go:88] waiting for apiserver healthz status ...
	I0709 10:23:23.815059    6700 api_server.go:253] Checking apiserver healthz at https://172.18.204.161:8443/healthz ...
	I0709 10:23:23.822770    6700 api_server.go:279] https://172.18.204.161:8443/healthz returned 200:
	ok
	I0709 10:23:23.823200    6700 round_trippers.go:463] GET https://172.18.204.161:8443/version
	I0709 10:23:23.823261    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.823355    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.823386    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.824546    6700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:23:23.825444    6700 api_server.go:141] control plane version: v1.30.2
	I0709 10:23:23.825584    6700 api_server.go:131] duration metric: took 10.525ms to wait for apiserver health ...
	I0709 10:23:23.825662    6700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 10:23:23.953771    6700 request.go:629] Waited for 127.9182ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:23:23.953877    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:23:23.953877    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.953877    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.953877    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.961114    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:23.968078    6700 system_pods.go:59] 17 kube-system pods found
	I0709 10:23:23.968078    6700 system_pods.go:61] "coredns-7db6d8ff4d-zbxnq" [127df4db-c095-440f-99a7-9292ba82a544] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "coredns-7db6d8ff4d-zst2x" [826902b3-67ea-41ab-8e36-ede312957536] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "etcd-ha-400600" [0ff09041-fa9f-43ec-bc74-714f695696dd] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "etcd-ha-400600-m02" [3b4c61e9-fc5d-4949-9270-1be8dae8a1eb] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kindnet-fnjm5" [3c5407e2-73e5-4514-a15d-1eb1e4355e09] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kindnet-qjr4d" [323f057b-87f0-43ad-80ba-19045dcf980e] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-apiserver-ha-400600" [8fa85247-6e51-4fac-b7f3-c8d1853320dc] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-apiserver-ha-400600-m02" [325f42b9-5ea2-4beb-b2ad-a922f61684eb] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-controller-manager-ha-400600" [9d031336-f17a-497c-abe1-5d5a2f0b0fd7] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-controller-manager-ha-400600-m02" [9b9c50f2-b753-4baf-9233-11fe5fecbf08] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-proxy-7k7w8" [048f20f9-b1a5-42d4-877d-e4d1393f1a4d] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-proxy-djlzm" [e73d5dec-dbd4-473d-b100-f3392ddb9445] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-scheduler-ha-400600" [ac1ef599-6195-41b1-803a-cf249851ad0b] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-scheduler-ha-400600-m02" [ecbe6536-b868-479c-bfdb-d038c413885e] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-vip-ha-400600" [d6b5a66d-c55b-49da-b972-18d29a106ee3] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-vip-ha-400600-m02" [98ea4304-96dd-4840-bafc-427e97b286f3] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "storage-provisioner" [f4b5ca7f-2c94-4c34-93b8-4977a2b723aa] Running
	I0709 10:23:23.968078    6700 system_pods.go:74] duration metric: took 142.4164ms to wait for pod list to return data ...
	I0709 10:23:23.968078    6700 default_sa.go:34] waiting for default service account to be created ...
	I0709 10:23:24.156616    6700 request.go:629] Waited for 187.7152ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/default/serviceaccounts
	I0709 10:23:24.156616    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/default/serviceaccounts
	I0709 10:23:24.156616    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:24.156616    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:24.156616    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:24.161662    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:24.162756    6700 default_sa.go:45] found service account: "default"
	I0709 10:23:24.162835    6700 default_sa.go:55] duration metric: took 194.756ms for default service account to be created ...
	I0709 10:23:24.162835    6700 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 10:23:24.358541    6700 request.go:629] Waited for 195.4367ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:23:24.358541    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:23:24.358763    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:24.358763    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:24.358763    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:24.366508    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:24.374700    6700 system_pods.go:86] 17 kube-system pods found
	I0709 10:23:24.374700    6700 system_pods.go:89] "coredns-7db6d8ff4d-zbxnq" [127df4db-c095-440f-99a7-9292ba82a544] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "coredns-7db6d8ff4d-zst2x" [826902b3-67ea-41ab-8e36-ede312957536] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "etcd-ha-400600" [0ff09041-fa9f-43ec-bc74-714f695696dd] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "etcd-ha-400600-m02" [3b4c61e9-fc5d-4949-9270-1be8dae8a1eb] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kindnet-fnjm5" [3c5407e2-73e5-4514-a15d-1eb1e4355e09] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kindnet-qjr4d" [323f057b-87f0-43ad-80ba-19045dcf980e] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-apiserver-ha-400600" [8fa85247-6e51-4fac-b7f3-c8d1853320dc] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-apiserver-ha-400600-m02" [325f42b9-5ea2-4beb-b2ad-a922f61684eb] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-controller-manager-ha-400600" [9d031336-f17a-497c-abe1-5d5a2f0b0fd7] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-controller-manager-ha-400600-m02" [9b9c50f2-b753-4baf-9233-11fe5fecbf08] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-proxy-7k7w8" [048f20f9-b1a5-42d4-877d-e4d1393f1a4d] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-proxy-djlzm" [e73d5dec-dbd4-473d-b100-f3392ddb9445] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-scheduler-ha-400600" [ac1ef599-6195-41b1-803a-cf249851ad0b] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-scheduler-ha-400600-m02" [ecbe6536-b868-479c-bfdb-d038c413885e] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-vip-ha-400600" [d6b5a66d-c55b-49da-b972-18d29a106ee3] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-vip-ha-400600-m02" [98ea4304-96dd-4840-bafc-427e97b286f3] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "storage-provisioner" [f4b5ca7f-2c94-4c34-93b8-4977a2b723aa] Running
	I0709 10:23:24.374700    6700 system_pods.go:126] duration metric: took 211.8649ms to wait for k8s-apps to be running ...
	I0709 10:23:24.374700    6700 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 10:23:24.392548    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 10:23:24.427525    6700 system_svc.go:56] duration metric: took 52.8246ms WaitForService to wait for kubelet
	I0709 10:23:24.427525    6700 kubeadm.go:576] duration metric: took 17.4781365s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 10:23:24.427525    6700 node_conditions.go:102] verifying NodePressure condition ...
	I0709 10:23:24.563935    6700 request.go:629] Waited for 135.4025ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes
	I0709 10:23:24.564226    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes
	I0709 10:23:24.564368    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:24.564390    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:24.564390    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:24.570220    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:24.571256    6700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:23:24.571256    6700 node_conditions.go:123] node cpu capacity is 2
	I0709 10:23:24.571256    6700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:23:24.571256    6700 node_conditions.go:123] node cpu capacity is 2
	I0709 10:23:24.571256    6700 node_conditions.go:105] duration metric: took 142.7233ms to run NodePressure ...
	I0709 10:23:24.571256    6700 start.go:240] waiting for startup goroutines ...
	I0709 10:23:24.571256    6700 start.go:254] writing updated cluster config ...
	I0709 10:23:24.575293    6700 out.go:177] 
	I0709 10:23:24.589380    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:23:24.589973    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:23:24.597327    6700 out.go:177] * Starting "ha-400600-m03" control-plane node in "ha-400600" cluster
	I0709 10:23:24.599694    6700 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 10:23:24.599694    6700 cache.go:56] Caching tarball of preloaded images
	I0709 10:23:24.599694    6700 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 10:23:24.600239    6700 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 10:23:24.600499    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:23:24.604823    6700 start.go:360] acquireMachinesLock for ha-400600-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 10:23:24.604823    6700 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-400600-m03"
	I0709 10:23:24.605203    6700 start.go:93] Provisioning new machine with config: &{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.194.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:23:24.605203    6700 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0709 10:23:24.607715    6700 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 10:23:24.608882    6700 start.go:159] libmachine.API.Create for "ha-400600" (driver="hyperv")
	I0709 10:23:24.608984    6700 client.go:168] LocalClient.Create starting
	I0709 10:23:24.609347    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 10:23:24.609868    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:23:24.609868    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:23:24.609868    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 10:23:24.609868    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:23:24.609868    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:23:24.609868    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 10:23:26.511464    6700 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 10:23:26.511464    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:26.511464    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 10:23:28.245149    6700 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 10:23:28.245185    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:28.245185    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:23:29.757893    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:23:29.758721    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:29.758721    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:23:33.542887    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:23:33.542887    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:33.544929    6700 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 10:23:33.985493    6700 main.go:141] libmachine: Creating SSH key...
	I0709 10:23:34.350447    6700 main.go:141] libmachine: Creating VM...
	I0709 10:23:34.350447    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:23:37.267930    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:23:37.267930    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:37.268122    6700 main.go:141] libmachine: Using switch "Default Switch"
	I0709 10:23:37.268122    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:23:39.039137    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:23:39.040055    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:39.040055    6700 main.go:141] libmachine: Creating VHD
	I0709 10:23:39.040055    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 10:23:42.846665    6700 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1CAB5C5E-5591-4B25-98CE-5DC8F79B9BFC
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 10:23:42.847732    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:42.847732    6700 main.go:141] libmachine: Writing magic tar header
	I0709 10:23:42.847732    6700 main.go:141] libmachine: Writing SSH key tar header
	I0709 10:23:42.856565    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 10:23:46.094711    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:23:46.094711    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:46.095337    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\disk.vhd' -SizeBytes 20000MB
	I0709 10:23:48.678788    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:23:48.678788    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:48.679461    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-400600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 10:23:52.397685    6700 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-400600-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 10:23:52.397685    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:52.397809    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-400600-m03 -DynamicMemoryEnabled $false
	I0709 10:23:54.686328    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:23:54.686530    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:54.686621    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-400600-m03 -Count 2
	I0709 10:23:56.903816    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:23:56.903816    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:56.904387    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-400600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\boot2docker.iso'
	I0709 10:23:59.497636    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:23:59.497636    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:59.497636    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-400600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\disk.vhd'
	I0709 10:24:02.203712    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:02.203712    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:02.203712    6700 main.go:141] libmachine: Starting VM...
	I0709 10:24:02.203712    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-400600-m03
	I0709 10:24:05.372618    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:05.372618    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:05.372618    6700 main.go:141] libmachine: Waiting for host to start...
	I0709 10:24:05.372744    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:07.762776    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:07.762854    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:07.762854    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:10.369792    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:10.369792    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:11.375501    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:13.682719    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:13.682785    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:13.682893    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:16.341977    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:16.341977    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:17.348138    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:19.625776    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:19.626793    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:19.626865    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:22.298277    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:22.298277    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:23.305923    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:25.564250    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:25.564320    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:25.564400    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:28.177445    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:28.178396    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:29.182821    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:31.490782    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:31.490782    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:31.490782    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:34.190292    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:24:34.191293    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:34.191420    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:36.418524    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:36.418524    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:36.418524    6700 machine.go:94] provisionDockerMachine start ...
	I0709 10:24:36.419247    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:38.649258    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:38.649908    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:38.650009    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:41.243798    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:24:41.243798    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:41.249898    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:24:41.250071    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:24:41.250071    6700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 10:24:41.376640    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 10:24:41.376640    6700 buildroot.go:166] provisioning hostname "ha-400600-m03"
	I0709 10:24:41.376801    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:43.566727    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:43.566727    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:43.566727    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:46.193308    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:24:46.193600    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:46.199791    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:24:46.200395    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:24:46.200395    6700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-400600-m03 && echo "ha-400600-m03" | sudo tee /etc/hostname
	I0709 10:24:46.348419    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-400600-m03
	
	I0709 10:24:46.348822    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:48.523526    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:48.523526    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:48.524170    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:51.156890    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:24:51.156890    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:51.165336    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:24:51.166460    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:24:51.166460    6700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-400600-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-400600-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-400600-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 10:24:51.316444    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 10:24:51.316562    6700 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 10:24:51.316562    6700 buildroot.go:174] setting up certificates
	I0709 10:24:51.316648    6700 provision.go:84] configureAuth start
	I0709 10:24:51.316648    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:53.577201    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:53.577201    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:53.577364    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:56.243194    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:24:56.243574    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:56.243639    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:58.470485    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:58.470717    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:58.470717    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:01.111864    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:01.112601    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:01.112601    6700 provision.go:143] copyHostCerts
	I0709 10:25:01.112768    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 10:25:01.113077    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 10:25:01.113077    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 10:25:01.113145    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 10:25:01.114893    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 10:25:01.114893    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 10:25:01.114893    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 10:25:01.115437    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 10:25:01.116798    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 10:25:01.116798    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 10:25:01.116798    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 10:25:01.117466    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 10:25:01.118570    6700 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-400600-m03 san=[127.0.0.1 172.18.201.166 ha-400600-m03 localhost minikube]
	I0709 10:25:01.299673    6700 provision.go:177] copyRemoteCerts
	I0709 10:25:01.314182    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 10:25:01.314182    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:03.506447    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:03.506447    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:03.506711    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:06.149043    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:06.149043    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:06.149926    6700 sshutil.go:53] new ssh client: &{IP:172.18.201.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\id_rsa Username:docker}
	I0709 10:25:06.256655    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9424617s)
	I0709 10:25:06.256775    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 10:25:06.257210    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0709 10:25:06.306320    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 10:25:06.306844    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0709 10:25:06.354008    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 10:25:06.354466    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 10:25:06.401973    6700 provision.go:87] duration metric: took 15.0852891s to configureAuth
	I0709 10:25:06.401973    6700 buildroot.go:189] setting minikube options for container-runtime
	I0709 10:25:06.403025    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:25:06.403114    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:08.590869    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:08.590869    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:08.590957    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:11.223812    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:11.223812    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:11.229892    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:25:11.230110    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:25:11.230110    6700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 10:25:11.347803    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 10:25:11.347897    6700 buildroot.go:70] root file system type: tmpfs
	I0709 10:25:11.348117    6700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 10:25:11.348198    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:13.539025    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:13.539025    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:13.539558    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:16.104022    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:16.104022    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:16.109866    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:25:16.110561    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:25:16.110561    6700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.204.161"
	Environment="NO_PROXY=172.18.204.161,172.18.194.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 10:25:16.267369    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.204.161
	Environment=NO_PROXY=172.18.204.161,172.18.194.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 10:25:16.267998    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:18.447678    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:18.447678    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:18.448281    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:21.081667    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:21.081667    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:21.086965    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:25:21.087746    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:25:21.087746    6700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 10:25:23.370298    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 10:25:23.370298    6700 machine.go:97] duration metric: took 46.9510639s to provisionDockerMachine
	I0709 10:25:23.370298    6700 client.go:171] duration metric: took 1m58.7610291s to LocalClient.Create
	I0709 10:25:23.370298    6700 start.go:167] duration metric: took 1m58.7611311s to libmachine.API.Create "ha-400600"
	I0709 10:25:23.370298    6700 start.go:293] postStartSetup for "ha-400600-m03" (driver="hyperv")
	I0709 10:25:23.370298    6700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 10:25:23.381348    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 10:25:23.382305    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:25.595109    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:25.595231    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:25.595363    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:28.233048    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:28.233936    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:28.233936    6700 sshutil.go:53] new ssh client: &{IP:172.18.201.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\id_rsa Username:docker}
	I0709 10:25:28.333550    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9512333s)
	I0709 10:25:28.347276    6700 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 10:25:28.355192    6700 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 10:25:28.355306    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 10:25:28.355708    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 10:25:28.356572    6700 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 10:25:28.358314    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 10:25:28.371350    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 10:25:28.394362    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 10:25:28.449449    6700 start.go:296] duration metric: took 5.0791393s for postStartSetup
	I0709 10:25:28.452230    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:30.637398    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:30.637581    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:30.637833    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:33.241536    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:33.242537    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:33.242537    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:25:33.244976    6700 start.go:128] duration metric: took 2m8.6394644s to createHost
	I0709 10:25:33.244976    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:35.442543    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:35.443317    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:35.443407    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:38.085383    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:38.085518    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:38.092100    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:25:38.092205    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:25:38.092205    6700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 10:25:38.217553    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720545938.208908593
	
	I0709 10:25:38.218172    6700 fix.go:216] guest clock: 1720545938.208908593
	I0709 10:25:38.218172    6700 fix.go:229] Guest: 2024-07-09 10:25:38.208908593 -0700 PDT Remote: 2024-07-09 10:25:33.2449769 -0700 PDT m=+570.586302101 (delta=4.963931693s)
	I0709 10:25:38.218240    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:40.500679    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:40.501453    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:40.501453    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:43.193090    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:43.193090    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:43.201427    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:25:43.202340    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:25:43.202340    6700 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720545938
	I0709 10:25:43.346308    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 17:25:38 UTC 2024
	
	I0709 10:25:43.346380    6700 fix.go:236] clock set: Tue Jul  9 17:25:38 UTC 2024
	 (err=<nil>)
	I0709 10:25:43.346380    6700 start.go:83] releasing machines lock for "ha-400600-m03", held for 2m18.7412242s
	I0709 10:25:43.346649    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:45.586826    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:45.587346    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:45.587486    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:48.209545    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:48.209545    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:48.212417    6700 out.go:177] * Found network options:
	I0709 10:25:48.214998    6700 out.go:177]   - NO_PROXY=172.18.204.161,172.18.194.29
	W0709 10:25:48.217163    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 10:25:48.217163    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 10:25:48.219334    6700 out.go:177]   - NO_PROXY=172.18.204.161,172.18.194.29
	W0709 10:25:48.221005    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 10:25:48.221005    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 10:25:48.223052    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 10:25:48.223052    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 10:25:48.225294    6700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 10:25:48.225882    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:48.239962    6700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 10:25:48.239962    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:50.567215    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:50.567215    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:50.567786    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:50.570775    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:50.570857    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:50.571001    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:53.288591    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:53.288591    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:53.288591    6700 sshutil.go:53] new ssh client: &{IP:172.18.201.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\id_rsa Username:docker}
	I0709 10:25:53.320991    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:53.321177    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:53.321494    6700 sshutil.go:53] new ssh client: &{IP:172.18.201.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\id_rsa Username:docker}
	I0709 10:25:53.437066    6700 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.211759s)
	I0709 10:25:53.437066    6700 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1970911s)
	W0709 10:25:53.437571    6700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 10:25:53.451941    6700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 10:25:53.481318    6700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 10:25:53.481434    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:25:53.481655    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:25:53.529203    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 10:25:53.565612    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 10:25:53.585057    6700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 10:25:53.597313    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 10:25:53.629796    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:25:53.660689    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 10:25:53.692784    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:25:53.726475    6700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 10:25:53.758932    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 10:25:53.793671    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 10:25:53.827556    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 10:25:53.859317    6700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 10:25:53.891874    6700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 10:25:53.924811    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:25:54.141022    6700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 10:25:54.176808    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:25:54.193627    6700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 10:25:54.229841    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:25:54.265606    6700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 10:25:54.308905    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:25:54.352191    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:25:54.389134    6700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 10:25:54.452587    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:25:54.478291    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:25:54.527807    6700 ssh_runner.go:195] Run: which cri-dockerd
	I0709 10:25:54.546760    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 10:25:54.565787    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 10:25:54.614962    6700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 10:25:54.810290    6700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 10:25:54.997721    6700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 10:25:54.997840    6700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 10:25:55.043731    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:25:55.253583    6700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 10:25:57.862610    6700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6090207s)
	I0709 10:25:57.874789    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 10:25:57.912451    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:25:57.951153    6700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 10:25:58.161761    6700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 10:25:58.371132    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:25:58.576135    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 10:25:58.617942    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:25:58.653973    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:25:58.877249    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 10:25:58.985980    6700 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 10:25:58.999388    6700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 10:25:59.008478    6700 start.go:562] Will wait 60s for crictl version
	I0709 10:25:59.020694    6700 ssh_runner.go:195] Run: which crictl
	I0709 10:25:59.039259    6700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 10:25:59.097519    6700 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 10:25:59.107756    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:25:59.152282    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:25:59.191299    6700 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 10:25:59.193807    6700 out.go:177]   - env NO_PROXY=172.18.204.161
	I0709 10:25:59.196660    6700 out.go:177]   - env NO_PROXY=172.18.204.161,172.18.194.29
	I0709 10:25:59.199651    6700 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 10:25:59.203589    6700 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 10:25:59.203589    6700 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 10:25:59.203589    6700 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 10:25:59.203589    6700 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 10:25:59.206504    6700 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 10:25:59.206504    6700 ip.go:210] interface addr: 172.18.192.1/20
	I0709 10:25:59.216500    6700 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 10:25:59.224070    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 10:25:59.248382    6700 mustload.go:65] Loading cluster: ha-400600
	I0709 10:25:59.249047    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:25:59.249267    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:26:01.405277    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:26:01.405277    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:26:01.405381    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:26:01.406194    6700 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600 for IP: 172.18.201.166
	I0709 10:26:01.406265    6700 certs.go:194] generating shared ca certs ...
	I0709 10:26:01.406265    6700 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:26:01.406969    6700 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 10:26:01.407361    6700 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 10:26:01.407599    6700 certs.go:256] generating profile certs ...
	I0709 10:26:01.407778    6700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.key
	I0709 10:26:01.408344    6700 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.44fcd7ea
	I0709 10:26:01.408561    6700 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.44fcd7ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.204.161 172.18.194.29 172.18.201.166 172.18.207.254]
	I0709 10:26:01.571022    6700 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.44fcd7ea ...
	I0709 10:26:01.571022    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.44fcd7ea: {Name:mk44a6f67565d8d3f66ae0e785452857941e5f1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:26:01.572320    6700 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.44fcd7ea ...
	I0709 10:26:01.573367    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.44fcd7ea: {Name:mk59de0b86a8a2193f4a1b38ab929a444a6dae7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:26:01.574104    6700 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.44fcd7ea -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt
	I0709 10:26:01.586050    6700 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.44fcd7ea -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key
	I0709 10:26:01.587528    6700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key
	I0709 10:26:01.587528    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 10:26:01.587528    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 10:26:01.588065    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 10:26:01.588137    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 10:26:01.588137    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 10:26:01.588137    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 10:26:01.588886    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 10:26:01.589345    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 10:26:01.589525    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 10:26:01.589525    6700 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 10:26:01.590073    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 10:26:01.590280    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 10:26:01.590280    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 10:26:01.590830    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 10:26:01.591204    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 10:26:01.591204    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 10:26:01.591204    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 10:26:01.591858    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:26:01.591897    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:26:03.805009    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:26:03.805701    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:26:03.805701    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:26:06.453552    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:26:06.454090    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:26:06.454284    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:26:06.556930    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0709 10:26:06.565595    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0709 10:26:06.603305    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0709 10:26:06.612246    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0709 10:26:06.655647    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0709 10:26:06.662776    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0709 10:26:06.697065    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0709 10:26:06.704268    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0709 10:26:06.739800    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0709 10:26:06.747470    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0709 10:26:06.783155    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0709 10:26:06.791028    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0709 10:26:06.814725    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 10:26:06.864353    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 10:26:06.914012    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 10:26:06.961286    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 10:26:07.013047    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0709 10:26:07.070964    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0709 10:26:07.120270    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 10:26:07.175260    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 10:26:07.223782    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 10:26:07.272007    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 10:26:07.320578    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 10:26:07.367792    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0709 10:26:07.400738    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0709 10:26:07.444517    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0709 10:26:07.477233    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0709 10:26:07.511570    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0709 10:26:07.543749    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0709 10:26:07.576422    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0709 10:26:07.624031    6700 ssh_runner.go:195] Run: openssl version
	I0709 10:26:07.646849    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 10:26:07.683020    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 10:26:07.691357    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 10:26:07.704632    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 10:26:07.726067    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 10:26:07.759669    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 10:26:07.792584    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 10:26:07.799786    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 10:26:07.812372    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 10:26:07.837905    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 10:26:07.870777    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 10:26:07.902540    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:26:07.910123    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:26:07.929552    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:26:07.951160    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 10:26:07.985870    6700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 10:26:07.992800    6700 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 10:26:07.992800    6700 kubeadm.go:928] updating node {m03 172.18.201.166 8443 v1.30.2 docker true true} ...
	I0709 10:26:07.992800    6700 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-400600-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.201.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 10:26:07.993340    6700 kube-vip.go:115] generating kube-vip config ...
	I0709 10:26:08.006882    6700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0709 10:26:08.036270    6700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0709 10:26:08.036755    6700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0709 10:26:08.049231    6700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 10:26:08.075721    6700 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0709 10:26:08.089122    6700 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0709 10:26:08.108360    6700 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0709 10:26:08.108360    6700 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0709 10:26:08.108360    6700 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0709 10:26:08.108656    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0709 10:26:08.108656    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0709 10:26:08.121319    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 10:26:08.123100    6700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0709 10:26:08.123100    6700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0709 10:26:08.144440    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0709 10:26:08.144440    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0709 10:26:08.144440    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0709 10:26:08.144440    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0709 10:26:08.144440    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0709 10:26:08.157454    6700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0709 10:26:08.202678    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0709 10:26:08.202678    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0709 10:26:09.492901    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0709 10:26:09.511062    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0709 10:26:09.543208    6700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 10:26:09.575792    6700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0709 10:26:09.618986    6700 ssh_runner.go:195] Run: grep 172.18.207.254	control-plane.minikube.internal$ /etc/hosts
	I0709 10:26:09.625261    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 10:26:09.664072    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:26:09.871371    6700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:26:09.908283    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:26:09.909323    6700 start.go:316] joinCluster: &{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.194.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.18.201.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 10:26:09.909524    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0709 10:26:09.909524    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:26:12.144942    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:26:12.144942    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:26:12.145494    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:26:14.769879    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:26:14.769879    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:26:14.769879    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:26:14.984087    6700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.074551s)
	I0709 10:26:14.984087    6700 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.18.201.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:26:14.984087    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 37dudu.o9hs9ibo2r1ddpqu --discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-400600-m03 --control-plane --apiserver-advertise-address=172.18.201.166 --apiserver-bind-port=8443"
	I0709 10:27:04.131140    6700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 37dudu.o9hs9ibo2r1ddpqu --discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-400600-m03 --control-plane --apiserver-advertise-address=172.18.201.166 --apiserver-bind-port=8443": (49.1468144s)
	I0709 10:27:04.131211    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0709 10:27:04.887742    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-400600-m03 minikube.k8s.io/updated_at=2024_07_09T10_27_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=ha-400600 minikube.k8s.io/primary=false
	I0709 10:27:05.086368    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-400600-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0709 10:27:05.297861    6700 start.go:318] duration metric: took 55.3884049s to joinCluster
	I0709 10:27:05.297861    6700 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.18.201.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:27:05.298861    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:27:05.301869    6700 out.go:177] * Verifying Kubernetes components...
	I0709 10:27:05.319867    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:27:05.782536    6700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:27:05.830797    6700 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:27:05.831820    6700 kapi.go:59] client config for ha-400600: &rest.Config{Host:"https://172.18.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0709 10:27:05.831949    6700 kubeadm.go:477] Overriding stale ClientConfig host https://172.18.207.254:8443 with https://172.18.204.161:8443
	I0709 10:27:05.832913    6700 node_ready.go:35] waiting up to 6m0s for node "ha-400600-m03" to be "Ready" ...
	I0709 10:27:05.833207    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:05.833207    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:05.833207    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:05.833207    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:05.847965    6700 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0709 10:27:06.347022    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:06.347022    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:06.347022    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:06.347022    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:06.353148    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:06.838909    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:06.838909    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:06.838909    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:06.838909    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:06.849002    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:27:07.346403    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:07.346403    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:07.346403    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:07.346403    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:07.353011    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:07.836108    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:07.836108    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:07.836197    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:07.836197    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:07.840780    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:07.842407    6700 node_ready.go:53] node "ha-400600-m03" has status "Ready":"False"
	I0709 10:27:08.343254    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:08.343254    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:08.343254    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:08.343254    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:08.353803    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:27:08.836429    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:08.836429    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:08.836429    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:08.836429    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:08.843669    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:27:09.340224    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:09.340461    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:09.340461    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:09.340461    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:09.344300    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:09.842559    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:09.842629    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:09.842629    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:09.842629    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:09.847302    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:09.848393    6700 node_ready.go:53] node "ha-400600-m03" has status "Ready":"False"
	I0709 10:27:10.344426    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:10.344426    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:10.344426    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:10.344426    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:10.348931    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:10.835462    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:10.835523    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:10.835523    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:10.835523    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:10.839944    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:11.341841    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:11.341841    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:11.342065    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:11.342065    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:11.347373    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:11.846325    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:11.846395    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:11.846395    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:11.846395    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:11.851280    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:11.852190    6700 node_ready.go:53] node "ha-400600-m03" has status "Ready":"False"
	I0709 10:27:12.340241    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:12.340241    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:12.340241    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:12.340241    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:12.345498    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:12.834669    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:12.834669    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:12.834669    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:12.834669    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:12.847267    6700 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0709 10:27:13.340311    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:13.340424    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:13.340424    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:13.340424    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:13.345900    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:13.842898    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:13.842898    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:13.842898    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:13.842898    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:13.846196    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.333861    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:14.333861    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.333861    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.333958    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.339937    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:14.340716    6700 node_ready.go:49] node "ha-400600-m03" has status "Ready":"True"
	I0709 10:27:14.340716    6700 node_ready.go:38] duration metric: took 8.5077823s for node "ha-400600-m03" to be "Ready" ...
	I0709 10:27:14.340716    6700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:27:14.340716    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:27:14.340716    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.340716    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.340716    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.350422    6700 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0709 10:27:14.359360    6700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.359891    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zbxnq
	I0709 10:27:14.359891    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.359891    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.360083    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.363232    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.364538    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:14.364538    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.364538    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.364638    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.367386    6700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:27:14.368732    6700 pod_ready.go:92] pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:14.368732    6700 pod_ready.go:81] duration metric: took 8.8409ms for pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.368732    6700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.368861    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zst2x
	I0709 10:27:14.368861    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.368861    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.368861    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.375686    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:14.376418    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:14.376418    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.376418    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.376418    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.379518    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.380791    6700 pod_ready.go:92] pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:14.380871    6700 pod_ready.go:81] duration metric: took 12.0591ms for pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.380871    6700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.380954    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600
	I0709 10:27:14.380954    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.380954    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.381027    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.384529    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.385878    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:14.385951    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.385951    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.385951    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.389533    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.390517    6700 pod_ready.go:92] pod "etcd-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:14.390517    6700 pod_ready.go:81] duration metric: took 9.6455ms for pod "etcd-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.390517    6700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.390517    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:27:14.390517    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.390517    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.390517    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.427658    6700 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0709 10:27:14.428957    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:14.429015    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.429015    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.429015    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.432651    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.433374    6700 pod_ready.go:92] pod "etcd-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:14.433420    6700 pod_ready.go:81] duration metric: took 42.9031ms for pod "etcd-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.433420    6700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.536590    6700 request.go:629] Waited for 103.1699ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m03
	I0709 10:27:14.536900    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m03
	I0709 10:27:14.536900    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.536900    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.536900    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.544395    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:27:14.740969    6700 request.go:629] Waited for 195.8636ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:14.741184    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:14.741386    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.741386    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.741386    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.745404    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:14.747200    6700 pod_ready.go:92] pod "etcd-ha-400600-m03" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:14.747261    6700 pod_ready.go:81] duration metric: took 313.8404ms for pod "etcd-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.747319    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.944441    6700 request.go:629] Waited for 197.0043ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600
	I0709 10:27:14.944903    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600
	I0709 10:27:14.944957    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.944957    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.944957    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.950415    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:15.149670    6700 request.go:629] Waited for 198.3095ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:15.149816    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:15.149816    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:15.149816    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:15.149816    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:15.155987    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:15.157141    6700 pod_ready.go:92] pod "kube-apiserver-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:15.157141    6700 pod_ready.go:81] duration metric: took 409.8209ms for pod "kube-apiserver-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:15.157141    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:15.335298    6700 request.go:629] Waited for 177.7955ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600-m02
	I0709 10:27:15.335298    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600-m02
	I0709 10:27:15.335298    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:15.335298    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:15.335298    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:15.339932    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:15.539569    6700 request.go:629] Waited for 197.9522ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:15.539672    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:15.539672    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:15.539672    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:15.539672    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:15.544086    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:15.545985    6700 pod_ready.go:92] pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:15.545985    6700 pod_ready.go:81] duration metric: took 388.8426ms for pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:15.545985    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:15.745693    6700 request.go:629] Waited for 199.5933ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600-m03
	I0709 10:27:15.745896    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600-m03
	I0709 10:27:15.745896    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:15.746006    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:15.746006    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:15.750473    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:15.934181    6700 request.go:629] Waited for 181.7701ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:15.934516    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:15.934516    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:15.934516    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:15.934516    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:15.938984    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:15.940780    6700 pod_ready.go:92] pod "kube-apiserver-ha-400600-m03" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:15.940780    6700 pod_ready.go:81] duration metric: took 394.7947ms for pod "kube-apiserver-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:15.940780    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:16.137064    6700 request.go:629] Waited for 196.1844ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600
	I0709 10:27:16.137537    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600
	I0709 10:27:16.137537    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:16.137623    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:16.137623    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:16.143355    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:16.341084    6700 request.go:629] Waited for 196.5253ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:16.341564    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:16.341564    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:16.341564    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:16.341564    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:16.348497    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:16.349377    6700 pod_ready.go:92] pod "kube-controller-manager-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:16.349377    6700 pod_ready.go:81] duration metric: took 408.5959ms for pod "kube-controller-manager-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:16.349522    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:16.546214    6700 request.go:629] Waited for 196.3771ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m02
	I0709 10:27:16.546322    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m02
	I0709 10:27:16.546459    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:16.546459    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:16.546459    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:16.551893    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:16.734346    6700 request.go:629] Waited for 180.5121ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:16.734598    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:16.734812    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:16.734812    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:16.734812    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:16.739241    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:16.740302    6700 pod_ready.go:92] pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:16.740388    6700 pod_ready.go:81] duration metric: took 390.7789ms for pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:16.740388    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:16.937456    6700 request.go:629] Waited for 196.7992ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m03
	I0709 10:27:16.937715    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m03
	I0709 10:27:16.937715    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:16.937715    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:16.937715    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:16.942313    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:17.143614    6700 request.go:629] Waited for 199.5741ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.143879    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.143879    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:17.143976    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:17.143976    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:17.148378    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:17.346530    6700 request.go:629] Waited for 93.3465ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m03
	I0709 10:27:17.346530    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m03
	I0709 10:27:17.346530    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:17.346530    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:17.346530    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:17.352714    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:17.548703    6700 request.go:629] Waited for 194.5504ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.548763    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.548763    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:17.548763    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:17.548763    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:17.554340    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:17.755334    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m03
	I0709 10:27:17.755334    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:17.755409    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:17.755409    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:17.762797    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:27:17.943994    6700 request.go:629] Waited for 179.6134ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.944208    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.944280    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:17.944280    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:17.944280    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:17.954678    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:27:17.956278    6700 pod_ready.go:92] pod "kube-controller-manager-ha-400600-m03" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:17.956336    6700 pod_ready.go:81] duration metric: took 1.2159447s for pod "kube-controller-manager-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:17.956336    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7k7w8" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:18.148614    6700 request.go:629] Waited for 192.0975ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7k7w8
	I0709 10:27:18.148837    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7k7w8
	I0709 10:27:18.148837    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:18.148837    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:18.148837    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:18.154655    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:18.336921    6700 request.go:629] Waited for 180.6165ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:18.337120    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:18.337120    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:18.337248    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:18.337248    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:18.341516    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:18.342441    6700 pod_ready.go:92] pod "kube-proxy-7k7w8" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:18.342441    6700 pod_ready.go:81] duration metric: took 386.0362ms for pod "kube-proxy-7k7w8" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:18.342441    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-djlzm" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:18.542799    6700 request.go:629] Waited for 199.8068ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djlzm
	I0709 10:27:18.542799    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djlzm
	I0709 10:27:18.542799    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:18.542799    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:18.542799    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:18.548800    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:18.747347    6700 request.go:629] Waited for 196.9674ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:18.747567    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:18.747567    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:18.747567    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:18.747567    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:18.753786    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:18.754673    6700 pod_ready.go:92] pod "kube-proxy-djlzm" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:18.754673    6700 pod_ready.go:81] duration metric: took 412.2311ms for pod "kube-proxy-djlzm" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:18.754673    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7rdj" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:18.936433    6700 request.go:629] Waited for 181.5939ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q7rdj
	I0709 10:27:18.936433    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q7rdj
	I0709 10:27:18.936433    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:18.936433    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:18.936433    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:18.941426    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:19.140044    6700 request.go:629] Waited for 197.2849ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:19.140044    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:19.140044    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:19.140044    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:19.140044    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:19.145538    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:19.146906    6700 pod_ready.go:92] pod "kube-proxy-q7rdj" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:19.146999    6700 pod_ready.go:81] duration metric: took 392.232ms for pod "kube-proxy-q7rdj" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:19.146999    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:19.343415    6700 request.go:629] Waited for 196.168ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600
	I0709 10:27:19.343415    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600
	I0709 10:27:19.343645    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:19.343645    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:19.343645    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:19.348909    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:19.533722    6700 request.go:629] Waited for 183.928ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:19.533722    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:19.533722    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:19.533722    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:19.533722    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:19.541714    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:27:19.543384    6700 pod_ready.go:92] pod "kube-scheduler-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:19.543384    6700 pod_ready.go:81] duration metric: took 396.3839ms for pod "kube-scheduler-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:19.543384    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:19.739484    6700 request.go:629] Waited for 195.6193ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m02
	I0709 10:27:19.739726    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m02
	I0709 10:27:19.739788    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:19.739813    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:19.739813    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:19.744225    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:19.944999    6700 request.go:629] Waited for 199.2037ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:19.944999    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:19.944999    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:19.944999    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:19.944999    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:19.951129    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:19.952107    6700 pod_ready.go:92] pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:19.952221    6700 pod_ready.go:81] duration metric: took 408.7363ms for pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:19.952221    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:20.148888    6700 request.go:629] Waited for 196.157ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m03
	I0709 10:27:20.149022    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m03
	I0709 10:27:20.149022    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.149022    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.149022    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.159012    6700 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0709 10:27:20.337749    6700 request.go:629] Waited for 177.4541ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:20.338051    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:20.338051    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.338051    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.338051    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.342665    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:20.344199    6700 pod_ready.go:92] pod "kube-scheduler-ha-400600-m03" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:20.344199    6700 pod_ready.go:81] duration metric: took 391.9767ms for pod "kube-scheduler-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:20.344199    6700 pod_ready.go:38] duration metric: took 6.003468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:27:20.344199    6700 api_server.go:52] waiting for apiserver process to appear ...
	I0709 10:27:20.358274    6700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 10:27:20.388561    6700 api_server.go:72] duration metric: took 15.090664s to wait for apiserver process to appear ...
	I0709 10:27:20.388638    6700 api_server.go:88] waiting for apiserver healthz status ...
	I0709 10:27:20.388688    6700 api_server.go:253] Checking apiserver healthz at https://172.18.204.161:8443/healthz ...
	I0709 10:27:20.399483    6700 api_server.go:279] https://172.18.204.161:8443/healthz returned 200:
	ok
	I0709 10:27:20.399892    6700 round_trippers.go:463] GET https://172.18.204.161:8443/version
	I0709 10:27:20.399990    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.399990    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.399990    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.401575    6700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:27:20.401956    6700 api_server.go:141] control plane version: v1.30.2
	I0709 10:27:20.401956    6700 api_server.go:131] duration metric: took 13.2685ms to wait for apiserver health ...
	I0709 10:27:20.401956    6700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 10:27:20.537937    6700 request.go:629] Waited for 135.7777ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:27:20.538396    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:27:20.538396    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.538396    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.538396    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.549615    6700 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0709 10:27:20.559823    6700 system_pods.go:59] 24 kube-system pods found
	I0709 10:27:20.559823    6700 system_pods.go:61] "coredns-7db6d8ff4d-zbxnq" [127df4db-c095-440f-99a7-9292ba82a544] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "coredns-7db6d8ff4d-zst2x" [826902b3-67ea-41ab-8e36-ede312957536] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "etcd-ha-400600" [0ff09041-fa9f-43ec-bc74-714f695696dd] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "etcd-ha-400600-m02" [3b4c61e9-fc5d-4949-9270-1be8dae8a1eb] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "etcd-ha-400600-m03" [243b6937-3e8a-4141-9caf-c62c6a5ff30a] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kindnet-9qlks" [902e6330-70e1-4dc7-abdb-c7fbc7bfc051] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kindnet-fnjm5" [3c5407e2-73e5-4514-a15d-1eb1e4355e09] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kindnet-qjr4d" [323f057b-87f0-43ad-80ba-19045dcf980e] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-apiserver-ha-400600" [8fa85247-6e51-4fac-b7f3-c8d1853320dc] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-apiserver-ha-400600-m02" [325f42b9-5ea2-4beb-b2ad-a922f61684eb] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-apiserver-ha-400600-m03" [ace87bbb-a5c5-40ca-a4d3-bc49bbc0e75b] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-controller-manager-ha-400600" [9d031336-f17a-497c-abe1-5d5a2f0b0fd7] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-controller-manager-ha-400600-m02" [9b9c50f2-b753-4baf-9233-11fe5fecbf08] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-controller-manager-ha-400600-m03" [c44033e8-cb30-4957-b85c-ae544b56ac2a] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-proxy-7k7w8" [048f20f9-b1a5-42d4-877d-e4d1393f1a4d] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-proxy-djlzm" [e73d5dec-dbd4-473d-b100-f3392ddb9445] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-proxy-q7rdj" [b8c183f7-8c5e-4103-bb6d-177b36a33a55] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-scheduler-ha-400600" [ac1ef599-6195-41b1-803a-cf249851ad0b] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-scheduler-ha-400600-m02" [ecbe6536-b868-479c-bfdb-d038c413885e] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-scheduler-ha-400600-m03" [a21ac894-2f56-459b-8c90-fa4539572859] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-vip-ha-400600" [d6b5a66d-c55b-49da-b972-18d29a106ee3] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-vip-ha-400600-m02" [98ea4304-96dd-4840-bafc-427e97b286f3] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-vip-ha-400600-m03" [03f3ea79-c50b-4392-8c13-5e9b0c168523] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "storage-provisioner" [f4b5ca7f-2c94-4c34-93b8-4977a2b723aa] Running
	I0709 10:27:20.559823    6700 system_pods.go:74] duration metric: took 157.8661ms to wait for pod list to return data ...
	I0709 10:27:20.559823    6700 default_sa.go:34] waiting for default service account to be created ...
	I0709 10:27:20.740293    6700 request.go:629] Waited for 180.2859ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/default/serviceaccounts
	I0709 10:27:20.740293    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/default/serviceaccounts
	I0709 10:27:20.740293    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.740293    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.740595    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.745269    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:20.746403    6700 default_sa.go:45] found service account: "default"
	I0709 10:27:20.746477    6700 default_sa.go:55] duration metric: took 186.6537ms for default service account to be created ...
	I0709 10:27:20.746477    6700 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 10:27:20.944581    6700 request.go:629] Waited for 197.9005ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:27:20.944844    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:27:20.944844    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.944914    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.944914    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.955373    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:27:20.965560    6700 system_pods.go:86] 24 kube-system pods found
	I0709 10:27:20.965560    6700 system_pods.go:89] "coredns-7db6d8ff4d-zbxnq" [127df4db-c095-440f-99a7-9292ba82a544] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "coredns-7db6d8ff4d-zst2x" [826902b3-67ea-41ab-8e36-ede312957536] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "etcd-ha-400600" [0ff09041-fa9f-43ec-bc74-714f695696dd] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "etcd-ha-400600-m02" [3b4c61e9-fc5d-4949-9270-1be8dae8a1eb] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "etcd-ha-400600-m03" [243b6937-3e8a-4141-9caf-c62c6a5ff30a] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kindnet-9qlks" [902e6330-70e1-4dc7-abdb-c7fbc7bfc051] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kindnet-fnjm5" [3c5407e2-73e5-4514-a15d-1eb1e4355e09] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kindnet-qjr4d" [323f057b-87f0-43ad-80ba-19045dcf980e] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-apiserver-ha-400600" [8fa85247-6e51-4fac-b7f3-c8d1853320dc] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-apiserver-ha-400600-m02" [325f42b9-5ea2-4beb-b2ad-a922f61684eb] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-apiserver-ha-400600-m03" [ace87bbb-a5c5-40ca-a4d3-bc49bbc0e75b] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-controller-manager-ha-400600" [9d031336-f17a-497c-abe1-5d5a2f0b0fd7] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-controller-manager-ha-400600-m02" [9b9c50f2-b753-4baf-9233-11fe5fecbf08] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-controller-manager-ha-400600-m03" [c44033e8-cb30-4957-b85c-ae544b56ac2a] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-proxy-7k7w8" [048f20f9-b1a5-42d4-877d-e4d1393f1a4d] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-proxy-djlzm" [e73d5dec-dbd4-473d-b100-f3392ddb9445] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-proxy-q7rdj" [b8c183f7-8c5e-4103-bb6d-177b36a33a55] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-scheduler-ha-400600" [ac1ef599-6195-41b1-803a-cf249851ad0b] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-scheduler-ha-400600-m02" [ecbe6536-b868-479c-bfdb-d038c413885e] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-scheduler-ha-400600-m03" [a21ac894-2f56-459b-8c90-fa4539572859] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-vip-ha-400600" [d6b5a66d-c55b-49da-b972-18d29a106ee3] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-vip-ha-400600-m02" [98ea4304-96dd-4840-bafc-427e97b286f3] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-vip-ha-400600-m03" [03f3ea79-c50b-4392-8c13-5e9b0c168523] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "storage-provisioner" [f4b5ca7f-2c94-4c34-93b8-4977a2b723aa] Running
	I0709 10:27:20.965560    6700 system_pods.go:126] duration metric: took 219.0224ms to wait for k8s-apps to be running ...
	I0709 10:27:20.966080    6700 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 10:27:20.976297    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 10:27:21.005793    6700 system_svc.go:56] duration metric: took 39.7125ms WaitForService to wait for kubelet
	I0709 10:27:21.005793    6700 kubeadm.go:576] duration metric: took 15.7078945s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 10:27:21.005793    6700 node_conditions.go:102] verifying NodePressure condition ...
	I0709 10:27:21.133948    6700 request.go:629] Waited for 127.8728ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes
	I0709 10:27:21.134055    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes
	I0709 10:27:21.134055    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:21.134055    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:21.134358    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:21.141082    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:21.143112    6700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:27:21.143412    6700 node_conditions.go:123] node cpu capacity is 2
	I0709 10:27:21.143412    6700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:27:21.143412    6700 node_conditions.go:123] node cpu capacity is 2
	I0709 10:27:21.143412    6700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:27:21.143412    6700 node_conditions.go:123] node cpu capacity is 2
	I0709 10:27:21.143412    6700 node_conditions.go:105] duration metric: took 137.6184ms to run NodePressure ...
	I0709 10:27:21.143514    6700 start.go:240] waiting for startup goroutines ...
	I0709 10:27:21.143610    6700 start.go:254] writing updated cluster config ...
	I0709 10:27:21.156152    6700 ssh_runner.go:195] Run: rm -f paused
	I0709 10:27:21.302072    6700 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0709 10:27:21.308827    6700 out.go:177] * Done! kubectl is now configured to use "ha-400600" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 09 17:19:29 ha-400600 cri-dockerd[1326]: time="2024-07-09T17:19:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/58c2b2ac6f9e2690b6605e899ab9b099d191928e5b3f207ef4c238737600fc46/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 17:19:29 ha-400600 cri-dockerd[1326]: time="2024-07-09T17:19:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/699b5efc73ef82c252861888d136c55df7adefdec0dc24464f2c7edc7d01ef23/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 17:19:29 ha-400600 dockerd[1429]: time="2024-07-09T17:19:29.866062905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:19:29 ha-400600 dockerd[1429]: time="2024-07-09T17:19:29.866798312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:19:29 ha-400600 dockerd[1429]: time="2024-07-09T17:19:29.867245717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:19:29 ha-400600 dockerd[1429]: time="2024-07-09T17:19:29.867936224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:19:29 ha-400600 cri-dockerd[1326]: time="2024-07-09T17:19:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e32192946816aeee2298c423db0732ff45aa771356c2af4387ded672c3fd128f/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.218067855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.218486346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.218599944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.218895938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.258839910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.259239502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.259405198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.259756591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:28:00 ha-400600 dockerd[1429]: time="2024-07-09T17:28:00.349077173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:28:00 ha-400600 dockerd[1429]: time="2024-07-09T17:28:00.349220773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:28:00 ha-400600 dockerd[1429]: time="2024-07-09T17:28:00.349241273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:28:00 ha-400600 dockerd[1429]: time="2024-07-09T17:28:00.349357273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:28:00 ha-400600 cri-dockerd[1326]: time="2024-07-09T17:28:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe8ea0c55db7cfbdee4483c64424b22daabd3958e9bb8b585b18251b610b05f1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 09 17:28:01 ha-400600 cri-dockerd[1326]: time="2024-07-09T17:28:01Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 09 17:28:02 ha-400600 dockerd[1429]: time="2024-07-09T17:28:02.286889643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:28:02 ha-400600 dockerd[1429]: time="2024-07-09T17:28:02.287020443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:28:02 ha-400600 dockerd[1429]: time="2024-07-09T17:28:02.287036843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:28:02 ha-400600 dockerd[1429]: time="2024-07-09T17:28:02.287472145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c38d753e09788       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   fe8ea0c55db7c       busybox-fc5497c4f-q8dt8
	548d2c1ac97b7       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   e32192946816a       coredns-7db6d8ff4d-zst2x
	4ff3baadb8c8f       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   699b5efc73ef8       coredns-7db6d8ff4d-zbxnq
	64effc0264832       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   58c2b2ac6f9e2       storage-provisioner
	eac7b8bb4f49b       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              9 minutes ago        Running             kindnet-cni               0                   5f382f5723fff       kindnet-qjr4d
	42bb9c056d496       53c535741fb44                                                                                         9 minutes ago        Running             kube-proxy                0                   0eadaf19a58a0       kube-proxy-7k7w8
	c25489a3f41d7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   cfb028a1171c9       kube-vip-ha-400600
	e915adad1065b       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   a71d3d70369e8       etcd-ha-400600
	a1cc87b040f15       e874818b3caac                                                                                         10 minutes ago       Running             kube-controller-manager   0                   41930f39bef9d       kube-controller-manager-ha-400600
	fef6bd73c6517       56ce0fd9fb532                                                                                         10 minutes ago       Running             kube-apiserver            0                   71eaea10f68b9       kube-apiserver-ha-400600
	88d916e2452ab       7820c83aa1394                                                                                         10 minutes ago       Running             kube-scheduler            0                   367ca65f8f005       kube-scheduler-ha-400600
	
	
	==> coredns [4ff3baadb8c8] <==
	[INFO] 10.244.0.4:44729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124001s
	[INFO] 10.244.0.4:44116 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.053769591s
	[INFO] 10.244.0.4:55715 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181601s
	[INFO] 10.244.0.4:38152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028920802s
	[INFO] 10.244.0.4:54687 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000298901s
	[INFO] 10.244.0.4:39755 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000232701s
	[INFO] 10.244.0.4:47376 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000238201s
	[INFO] 10.244.1.2:57447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105601s
	[INFO] 10.244.1.2:45879 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000899s
	[INFO] 10.244.2.2:59081 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074s
	[INFO] 10.244.2.2:48748 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000645s
	[INFO] 10.244.2.2:59259 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001308s
	[INFO] 10.244.0.4:41332 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001594s
	[INFO] 10.244.1.2:38959 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120301s
	[INFO] 10.244.1.2:58703 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000261701s
	[INFO] 10.244.1.2:53423 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001448s
	[INFO] 10.244.2.2:38018 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001515s
	[INFO] 10.244.2.2:44098 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001053s
	[INFO] 10.244.2.2:41721 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000598s
	[INFO] 10.244.0.4:50957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119301s
	[INFO] 10.244.1.2:33071 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189s
	[INFO] 10.244.1.2:52032 - 5 "PTR IN 1.192.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000917s
	[INFO] 10.244.2.2:37018 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192101s
	[INFO] 10.244.2.2:42620 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001187s
	[INFO] 10.244.2.2:60585 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001285s
	
	
	==> coredns [548d2c1ac97b] <==
	[INFO] 10.244.2.2:43489 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000733s
	[INFO] 10.244.2.2:45418 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0002134s
	[INFO] 10.244.0.4:39856 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002405s
	[INFO] 10.244.1.2:53039 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011866542s
	[INFO] 10.244.1.2:49255 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000642s
	[INFO] 10.244.1.2:37031 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001088s
	[INFO] 10.244.1.2:33874 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028676502s
	[INFO] 10.244.1.2:41914 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162s
	[INFO] 10.244.1.2:43276 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000944s
	[INFO] 10.244.2.2:37123 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227601s
	[INFO] 10.244.2.2:56961 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000663s
	[INFO] 10.244.2.2:40967 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001742s
	[INFO] 10.244.2.2:55610 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001329s
	[INFO] 10.244.2.2:33679 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000129601s
	[INFO] 10.244.0.4:45218 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152501s
	[INFO] 10.244.0.4:43941 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164801s
	[INFO] 10.244.0.4:59289 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000399901s
	[INFO] 10.244.1.2:48110 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000693s
	[INFO] 10.244.2.2:59625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001848s
	[INFO] 10.244.0.4:37225 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000254101s
	[INFO] 10.244.0.4:54435 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182901s
	[INFO] 10.244.0.4:51817 - 5 "PTR IN 1.192.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000219301s
	[INFO] 10.244.1.2:41079 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001904s
	[INFO] 10.244.1.2:46791 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104s
	[INFO] 10.244.2.2:34112 - 5 "PTR IN 1.192.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142801s
	
	
	==> describe nodes <==
	Name:               ha-400600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-400600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=ha-400600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T10_19_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 17:19:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-400600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 17:28:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 17:28:06 +0000   Tue, 09 Jul 2024 17:19:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 17:28:06 +0000   Tue, 09 Jul 2024 17:19:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 17:28:06 +0000   Tue, 09 Jul 2024 17:19:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 17:28:06 +0000   Tue, 09 Jul 2024 17:19:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.204.161
	  Hostname:    ha-400600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 57e7ded038fc422cb9252166abd87e14
	  System UUID:                1e1a00cd-004d-6e42-b1fb-ad4e24bc426a
	  Boot ID:                    650ebcd8-63b1-4424-9b06-df7a08fde84d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q8dt8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 coredns-7db6d8ff4d-zbxnq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m45s
	  kube-system                 coredns-7db6d8ff4d-zst2x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m45s
	  kube-system                 etcd-ha-400600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m58s
	  kube-system                 kindnet-qjr4d                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m44s
	  kube-system                 kube-apiserver-ha-400600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-controller-manager-ha-400600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-proxy-7k7w8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-scheduler-ha-400600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-vip-ha-400600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node ha-400600 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m58s (x2 over 9m58s)  kubelet          Node ha-400600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m58s (x2 over 9m58s)  kubelet          Node ha-400600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m58s (x2 over 9m58s)  kubelet          Node ha-400600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m45s                  node-controller  Node ha-400600 event: Registered Node ha-400600 in Controller
	  Normal  NodeReady                9m35s                  kubelet          Node ha-400600 status is now: NodeReady
	  Normal  RegisteredNode           5m40s                  node-controller  Node ha-400600 event: Registered Node ha-400600 in Controller
	  Normal  RegisteredNode           104s                   node-controller  Node ha-400600 event: Registered Node ha-400600 in Controller
	
	
	Name:               ha-400600-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-400600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=ha-400600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_09T10_23_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 17:23:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-400600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 17:28:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 17:28:37 +0000   Tue, 09 Jul 2024 17:23:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 17:28:37 +0000   Tue, 09 Jul 2024 17:23:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 17:28:37 +0000   Tue, 09 Jul 2024 17:23:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 17:28:37 +0000   Tue, 09 Jul 2024 17:23:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.194.29
	  Hostname:    ha-400600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6190dbe481a94900add605d2b7c6d5ff
	  System UUID:                42e9a45a-f84a-924a-bfd7-75e67dc20830
	  Boot ID:                    17ff975b-644a-48a2-9725-dda2d103583a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sf672                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 etcd-ha-400600-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m58s
	  kube-system                 kindnet-fnjm5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m3s
	  kube-system                 kube-apiserver-ha-400600-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-controller-manager-ha-400600-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-proxy-djlzm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-ha-400600-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-vip-ha-400600-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)  kubelet          Node ha-400600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)  kubelet          Node ha-400600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m3s)  kubelet          Node ha-400600-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m59s                node-controller  Node ha-400600-m02 event: Registered Node ha-400600-m02 in Controller
	  Normal  RegisteredNode           5m40s                node-controller  Node ha-400600-m02 event: Registered Node ha-400600-m02 in Controller
	  Normal  RegisteredNode           104s                 node-controller  Node ha-400600-m02 event: Registered Node ha-400600-m02 in Controller
	
	
	Name:               ha-400600-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-400600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=ha-400600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_09T10_27_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 17:26:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-400600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 17:29:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 17:28:29 +0000   Tue, 09 Jul 2024 17:26:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 17:28:29 +0000   Tue, 09 Jul 2024 17:26:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 17:28:29 +0000   Tue, 09 Jul 2024 17:26:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 17:28:29 +0000   Tue, 09 Jul 2024 17:27:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.201.166
	  Hostname:    ha-400600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 59cfe4c3ff2745f9a8a38dc1ed715dde
	  System UUID:                db9566e8-cf7d-9a47-8e9c-ca188d985bba
	  Boot ID:                    2f814ff1-f7a4-447e-8b19-a1452ef7ba03
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wvs72                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 etcd-ha-400600-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m3s
	  kube-system                 kindnet-9qlks                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m7s
	  kube-system                 kube-apiserver-ha-400600-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-ha-400600-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-q7rdj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-ha-400600-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-vip-ha-400600-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node ha-400600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node ha-400600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node ha-400600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m6s                 node-controller  Node ha-400600-m03 event: Registered Node ha-400600-m03 in Controller
	  Normal  RegisteredNode           2m5s                 node-controller  Node ha-400600-m03 event: Registered Node ha-400600-m03 in Controller
	  Normal  RegisteredNode           105s                 node-controller  Node ha-400600-m03 event: Registered Node ha-400600-m03 in Controller
	
	
	==> dmesg <==
	[  +1.101871] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.961487] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.989969] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.152280] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul 9 17:18] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[  +0.107159] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.528428] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.180999] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.237108] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.834158] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.189260] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.192051] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.284193] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[ +11.560710] systemd-fstab-generator[1414]: Ignoring "noauto" option for root device
	[  +0.105947] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.087945] systemd-fstab-generator[1664]: Ignoring "noauto" option for root device
	[  +6.396440] systemd-fstab-generator[1871]: Ignoring "noauto" option for root device
	[  +0.094470] kauditd_printk_skb: 70 callbacks suppressed
	[Jul 9 17:19] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.498662] systemd-fstab-generator[2370]: Ignoring "noauto" option for root device
	[ +15.136047] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.926042] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.544792] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [e915adad1065] <==
	{"level":"info","ts":"2024-07-09T17:26:58.392617Z","caller":"traceutil/trace.go:171","msg":"trace[2053988484] range","detail":"{range_begin:/registry/events/default/ha-400600-m03.17e09b7cb227808e; range_end:; response_count:1; response_revision:1471; }","duration":"191.228695ms","start":"2024-07-09T17:26:58.20138Z","end":"2024-07-09T17:26:58.392609Z","steps":["trace[2053988484] 'agreement among raft nodes before linearized reading'  (duration: 191.096495ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T17:26:58.394697Z","caller":"traceutil/trace.go:171","msg":"trace[1434202844] transaction","detail":"{read_only:false; response_revision:1472; number_of_response:1; }","duration":"186.78859ms","start":"2024-07-09T17:26:58.207898Z","end":"2024-07-09T17:26:58.394686Z","steps":["trace[1434202844] 'process raft request'  (duration: 186.70469ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T17:26:59.424695Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"18a4ea4811f9417a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-09T17:26:59.755601Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://172.18.201.166:2380/version","remote-member-id":"18a4ea4811f9417a","error":"Get \"https://172.18.201.166:2380/version\": dial tcp 172.18.201.166:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-09T17:26:59.755861Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"18a4ea4811f9417a","error":"Get \"https://172.18.201.166:2380/version\": dial tcp 172.18.201.166:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-09T17:27:00.410744Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"18a4ea4811f9417a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-09T17:27:01.347278Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"18a4ea4811f9417a"}
	{"level":"info","ts":"2024-07-09T17:27:01.37959Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"77d3936d53d35644","remote-peer-id":"18a4ea4811f9417a"}
	{"level":"info","ts":"2024-07-09T17:27:01.38028Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"77d3936d53d35644","remote-peer-id":"18a4ea4811f9417a"}
	{"level":"warn","ts":"2024-07-09T17:27:01.409877Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"18a4ea4811f9417a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-09T17:27:01.471022Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"77d3936d53d35644","to":"18a4ea4811f9417a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-09T17:27:01.471257Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"77d3936d53d35644","remote-peer-id":"18a4ea4811f9417a"}
	{"level":"info","ts":"2024-07-09T17:27:01.54202Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"77d3936d53d35644","to":"18a4ea4811f9417a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-09T17:27:01.542075Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"77d3936d53d35644","remote-peer-id":"18a4ea4811f9417a"}
	{"level":"warn","ts":"2024-07-09T17:27:02.412515Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"18a4ea4811f9417a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-09T17:27:03.410965Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"18a4ea4811f9417a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-09T17:27:03.925241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77d3936d53d35644 switched to configuration voters=(1775801748350910842 8634407008366450244 9836222674861180420)"}
	{"level":"info","ts":"2024-07-09T17:27:03.925335Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"3fc60931dad95305","local-member-id":"77d3936d53d35644"}
	{"level":"info","ts":"2024-07-09T17:27:03.925781Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"77d3936d53d35644","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"18a4ea4811f9417a"}
	{"level":"info","ts":"2024-07-09T17:27:11.830986Z","caller":"traceutil/trace.go:171","msg":"trace[1362457575] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"169.270978ms","start":"2024-07-09T17:27:11.661701Z","end":"2024-07-09T17:27:11.830972Z","steps":["trace[1362457575] 'process raft request'  (duration: 169.168378ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T17:27:59.476892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.825927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-sf672\" ","response":"range_response_count:1 size:1813"}
	{"level":"info","ts":"2024-07-09T17:27:59.477043Z","caller":"traceutil/trace.go:171","msg":"trace[1580678671] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-sf672; range_end:; response_count:1; response_revision:1719; }","duration":"111.994627ms","start":"2024-07-09T17:27:59.365034Z","end":"2024-07-09T17:27:59.477028Z","steps":["trace[1580678671] 'agreement among raft nodes before linearized reading'  (duration: 88.2956ms)","trace[1580678671] 'range keys from in-memory index tree'  (duration: 23.495727ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-09T17:28:59.620517Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1053}
	{"level":"info","ts":"2024-07-09T17:28:59.794582Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1053,"took":"173.419788ms","hash":2395273001,"current-db-size-bytes":3538944,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":2023424,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-07-09T17:28:59.794797Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2395273001,"revision":1053,"compact-revision":-1}
	
	
	==> kernel <==
	 17:29:04 up 12 min,  0 users,  load average: 0.48, 0.42, 0.23
	Linux ha-400600 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [eac7b8bb4f49] <==
	I0709 17:28:18.115576       1 main.go:250] Node ha-400600-m03 has CIDR [10.244.2.0/24] 
	I0709 17:28:28.127408       1 main.go:223] Handling node with IPs: map[172.18.204.161:{}]
	I0709 17:28:28.127535       1 main.go:227] handling current node
	I0709 17:28:28.127550       1 main.go:223] Handling node with IPs: map[172.18.194.29:{}]
	I0709 17:28:28.127558       1 main.go:250] Node ha-400600-m02 has CIDR [10.244.1.0/24] 
	I0709 17:28:28.127978       1 main.go:223] Handling node with IPs: map[172.18.201.166:{}]
	I0709 17:28:28.128067       1 main.go:250] Node ha-400600-m03 has CIDR [10.244.2.0/24] 
	I0709 17:28:38.143458       1 main.go:223] Handling node with IPs: map[172.18.204.161:{}]
	I0709 17:28:38.143538       1 main.go:227] handling current node
	I0709 17:28:38.143594       1 main.go:223] Handling node with IPs: map[172.18.194.29:{}]
	I0709 17:28:38.143602       1 main.go:250] Node ha-400600-m02 has CIDR [10.244.1.0/24] 
	I0709 17:28:38.144319       1 main.go:223] Handling node with IPs: map[172.18.201.166:{}]
	I0709 17:28:38.144336       1 main.go:250] Node ha-400600-m03 has CIDR [10.244.2.0/24] 
	I0709 17:28:48.163716       1 main.go:223] Handling node with IPs: map[172.18.204.161:{}]
	I0709 17:28:48.163835       1 main.go:227] handling current node
	I0709 17:28:48.163859       1 main.go:223] Handling node with IPs: map[172.18.194.29:{}]
	I0709 17:28:48.163868       1 main.go:250] Node ha-400600-m02 has CIDR [10.244.1.0/24] 
	I0709 17:28:48.164234       1 main.go:223] Handling node with IPs: map[172.18.201.166:{}]
	I0709 17:28:48.164406       1 main.go:250] Node ha-400600-m03 has CIDR [10.244.2.0/24] 
	I0709 17:28:58.174972       1 main.go:223] Handling node with IPs: map[172.18.204.161:{}]
	I0709 17:28:58.175035       1 main.go:227] handling current node
	I0709 17:28:58.175055       1 main.go:223] Handling node with IPs: map[172.18.194.29:{}]
	I0709 17:28:58.175066       1 main.go:250] Node ha-400600-m02 has CIDR [10.244.1.0/24] 
	I0709 17:28:58.175521       1 main.go:223] Handling node with IPs: map[172.18.201.166:{}]
	I0709 17:28:58.175666       1 main.go:250] Node ha-400600-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [fef6bd73c651] <==
	I0709 17:19:05.026394       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0709 17:19:05.065796       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0709 17:19:05.096555       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0709 17:19:18.722034       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0709 17:19:18.998060       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0709 17:26:58.430052       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0709 17:26:58.430225       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0709 17:26:58.430398       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 6.6µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0709 17:26:58.431518       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0709 17:26:58.432599       1 timeout.go:142] post-timeout activity - time-elapsed: 2.573703ms, PATCH "/api/v1/namespaces/default/events/ha-400600-m03.17e09b7cb22757ed" result: <nil>
	E0709 17:28:06.153264       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53128: use of closed network connection
	E0709 17:28:06.649244       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53130: use of closed network connection
	E0709 17:28:07.105622       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53132: use of closed network connection
	E0709 17:28:07.635482       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53134: use of closed network connection
	E0709 17:28:08.126944       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53137: use of closed network connection
	E0709 17:28:08.589925       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53139: use of closed network connection
	E0709 17:28:09.078855       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53141: use of closed network connection
	E0709 17:28:09.536295       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53143: use of closed network connection
	E0709 17:28:09.995780       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53145: use of closed network connection
	E0709 17:28:10.782882       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53148: use of closed network connection
	E0709 17:28:21.213080       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53150: use of closed network connection
	E0709 17:28:21.660883       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53152: use of closed network connection
	E0709 17:28:32.114923       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53154: use of closed network connection
	E0709 17:28:32.587010       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53157: use of closed network connection
	E0709 17:28:43.052063       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53159: use of closed network connection
	
	
	==> kube-controller-manager [a1cc87b040f1] <==
	I0709 17:19:28.952762       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0709 17:19:31.136255       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="142.298µs"
	I0709 17:19:31.242075       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.374011ms"
	I0709 17:19:31.242908       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="229.298µs"
	I0709 17:19:31.307942       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.336414ms"
	I0709 17:19:31.308851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="294.998µs"
	I0709 17:23:00.969395       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-400600-m02\" does not exist"
	I0709 17:23:00.993276       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-400600-m02" podCIDRs=["10.244.1.0/24"]
	I0709 17:23:04.001953       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-400600-m02"
	I0709 17:26:57.596782       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-400600-m03\" does not exist"
	I0709 17:26:57.620896       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-400600-m03" podCIDRs=["10.244.2.0/24"]
	I0709 17:26:59.106707       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-400600-m03"
	I0709 17:27:59.401299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="180.062004ms"
	I0709 17:27:59.718081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="316.499459ms"
	I0709 17:27:59.870359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="152.125272ms"
	I0709 17:27:59.904628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.613538ms"
	I0709 17:27:59.905217       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.5µs"
	I0709 17:27:59.993259       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.607351ms"
	I0709 17:27:59.993626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="283.1µs"
	I0709 17:28:00.923962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="192.701µs"
	I0709 17:28:02.692544       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.401608ms"
	I0709 17:28:02.762764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.876448ms"
	I0709 17:28:02.762864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.9µs"
	I0709 17:28:03.429454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.097789ms"
	I0709 17:28:03.429767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.2µs"
	
	
	==> kube-proxy [42bb9c056d49] <==
	I0709 17:19:20.229090       1 server_linux.go:69] "Using iptables proxy"
	I0709 17:19:20.242853       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.204.161"]
	I0709 17:19:20.376245       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 17:19:20.376293       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 17:19:20.376313       1 server_linux.go:165] "Using iptables Proxier"
	I0709 17:19:20.381806       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 17:19:20.382702       1 server.go:872] "Version info" version="v1.30.2"
	I0709 17:19:20.382799       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 17:19:20.384144       1 config.go:192] "Starting service config controller"
	I0709 17:19:20.384291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 17:19:20.384609       1 config.go:101] "Starting endpoint slice config controller"
	I0709 17:19:20.384648       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 17:19:20.385730       1 config.go:319] "Starting node config controller"
	I0709 17:19:20.385761       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 17:19:20.484754       1 shared_informer.go:320] Caches are synced for service config
	I0709 17:19:20.486064       1 shared_informer.go:320] Caches are synced for node config
	I0709 17:19:20.486091       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [88d916e2452a] <==
	W0709 17:19:02.503326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0709 17:19:02.503419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0709 17:19:02.574447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0709 17:19:02.574482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0709 17:19:02.621919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 17:19:02.623662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 17:19:02.662666       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0709 17:19:02.662734       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0709 17:19:02.790862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0709 17:19:02.791223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0709 17:19:02.880036       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 17:19:02.880536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0709 17:19:02.889363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0709 17:19:02.889389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0709 17:19:02.897230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0709 17:19:02.897362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0709 17:19:02.908558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0709 17:19:02.908727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0709 17:19:03.086449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0709 17:19:03.086608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0709 17:19:05.328236       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0709 17:26:57.737228       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-q7rdj\": pod kube-proxy-q7rdj is already assigned to node \"ha-400600-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-q7rdj" node="ha-400600-m03"
	E0709 17:26:57.737423       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b8c183f7-8c5e-4103-bb6d-177b36a33a55(kube-system/kube-proxy-q7rdj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-q7rdj"
	E0709 17:26:57.738766       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-q7rdj\": pod kube-proxy-q7rdj is already assigned to node \"ha-400600-m03\"" pod="kube-system/kube-proxy-q7rdj"
	I0709 17:26:57.738919       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-q7rdj" node="ha-400600-m03"
	
	
	==> kubelet <==
	Jul 09 17:26:05 ha-400600 kubelet[2377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 17:27:05 ha-400600 kubelet[2377]: E0709 17:27:05.127055    2377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 17:27:05 ha-400600 kubelet[2377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 17:27:05 ha-400600 kubelet[2377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 17:27:05 ha-400600 kubelet[2377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 17:27:05 ha-400600 kubelet[2377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 17:27:59 ha-400600 kubelet[2377]: I0709 17:27:59.412130    2377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zbxnq" podStartSLOduration=521.412105809 podStartE2EDuration="8m41.412105809s" podCreationTimestamp="2024-07-09 17:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-09 17:19:31.27494381 +0000 UTC m=+26.403578235" watchObservedRunningTime="2024-07-09 17:27:59.412105809 +0000 UTC m=+534.540740234"
	Jul 09 17:27:59 ha-400600 kubelet[2377]: I0709 17:27:59.412482    2377 topology_manager.go:215] "Topology Admit Handler" podUID="4dd108eb-d6a3-4a30-98a2-72ef6fdb4415" podNamespace="default" podName="busybox-fc5497c4f-q8dt8"
	Jul 09 17:27:59 ha-400600 kubelet[2377]: I0709 17:27:59.486851    2377 topology_manager.go:215] "Topology Admit Handler" podUID="d30ad095-5023-480c-81e2-a981020fe32f" podNamespace="default" podName="busybox-fc5497c4f-vw2bt"
	Jul 09 17:27:59 ha-400600 kubelet[2377]: I0709 17:27:59.564768    2377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nhn8\" (UniqueName: \"kubernetes.io/projected/4dd108eb-d6a3-4a30-98a2-72ef6fdb4415-kube-api-access-4nhn8\") pod \"busybox-fc5497c4f-q8dt8\" (UID: \"4dd108eb-d6a3-4a30-98a2-72ef6fdb4415\") " pod="default/busybox-fc5497c4f-q8dt8"
	Jul 09 17:27:59 ha-400600 kubelet[2377]: I0709 17:27:59.564884    2377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdf8v\" (UniqueName: \"kubernetes.io/projected/d30ad095-5023-480c-81e2-a981020fe32f-kube-api-access-fdf8v\") pod \"busybox-fc5497c4f-vw2bt\" (UID: \"d30ad095-5023-480c-81e2-a981020fe32f\") " pod="default/busybox-fc5497c4f-vw2bt"
	Jul 09 17:27:59 ha-400600 kubelet[2377]: E0709 17:27:59.572258    2377 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-fdf8v], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-fc5497c4f-vw2bt" podUID="d30ad095-5023-480c-81e2-a981020fe32f"
	Jul 09 17:27:59 ha-400600 kubelet[2377]: I0709 17:27:59.673934    2377 status_manager.go:877] "Failed to update status for pod" pod="default/busybox-fc5497c4f-vw2bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30ad095-5023-480c-81e2-a981020fe32f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-07-09T17:27:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-07-09T17:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-07-09T17:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [busybox]\\\",\\\"reason\\\":\\\"Cont
ainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-07-09T17:27:59Z\\\",\\\"message\\\":\\\"containers with unready status: [busybox]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"gcr.io/k8s-minikube/busybox:1.28\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"busybox\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"172.18.204.161\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"172.18.204.161\\\"}],\\\"startTime\\\":\\\"2024-07-09T17:27:59Z\\\"}}\" for pod \"default\"/\"busybox-fc5497c4f-vw2bt\": pods \"busybox-fc5497c4f-vw2bt\" not found"
	Jul 09 17:27:59 ha-400600 kubelet[2377]: E0709 17:27:59.675773    2377 projected.go:200] Error preparing data for projected volume kube-api-access-fdf8v for pod default/busybox-fc5497c4f-vw2bt: failed to fetch token: pod "busybox-fc5497c4f-vw2bt" not found
	Jul 09 17:27:59 ha-400600 kubelet[2377]: E0709 17:27:59.675912    2377 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d30ad095-5023-480c-81e2-a981020fe32f-kube-api-access-fdf8v podName:d30ad095-5023-480c-81e2-a981020fe32f nodeName:}" failed. No retries permitted until 2024-07-09 17:28:00.175885709 +0000 UTC m=+535.304520134 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fdf8v" (UniqueName: "kubernetes.io/projected/d30ad095-5023-480c-81e2-a981020fe32f-kube-api-access-fdf8v") pod "busybox-fc5497c4f-vw2bt" (UID: "d30ad095-5023-480c-81e2-a981020fe32f") : failed to fetch token: pod "busybox-fc5497c4f-vw2bt" not found
	Jul 09 17:28:00 ha-400600 kubelet[2377]: E0709 17:28:00.273966    2377 projected.go:200] Error preparing data for projected volume kube-api-access-fdf8v for pod default/busybox-fc5497c4f-vw2bt: failed to fetch token: pod "busybox-fc5497c4f-vw2bt" not found
	Jul 09 17:28:00 ha-400600 kubelet[2377]: E0709 17:28:00.274788    2377 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d30ad095-5023-480c-81e2-a981020fe32f-kube-api-access-fdf8v podName:d30ad095-5023-480c-81e2-a981020fe32f nodeName:}" failed. No retries permitted until 2024-07-09 17:28:01.274715788 +0000 UTC m=+536.403350113 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fdf8v" (UniqueName: "kubernetes.io/projected/d30ad095-5023-480c-81e2-a981020fe32f-kube-api-access-fdf8v") pod "busybox-fc5497c4f-vw2bt" (UID: "d30ad095-5023-480c-81e2-a981020fe32f") : failed to fetch token: pod "busybox-fc5497c4f-vw2bt" not found
	Jul 09 17:28:00 ha-400600 kubelet[2377]: I0709 17:28:00.553024    2377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe8ea0c55db7cfbdee4483c64424b22daabd3958e9bb8b585b18251b610b05f1"
	Jul 09 17:28:00 ha-400600 kubelet[2377]: I0709 17:28:00.674953    2377 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fdf8v\" (UniqueName: \"kubernetes.io/projected/d30ad095-5023-480c-81e2-a981020fe32f-kube-api-access-fdf8v\") on node \"ha-400600\" DevicePath \"\""
	Jul 09 17:28:01 ha-400600 kubelet[2377]: I0709 17:28:01.069992    2377 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30ad095-5023-480c-81e2-a981020fe32f" path="/var/lib/kubelet/pods/d30ad095-5023-480c-81e2-a981020fe32f/volumes"
	Jul 09 17:28:05 ha-400600 kubelet[2377]: E0709 17:28:05.123543    2377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 17:28:05 ha-400600 kubelet[2377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 17:28:05 ha-400600 kubelet[2377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 17:28:05 ha-400600 kubelet[2377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 17:28:05 ha-400600 kubelet[2377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 10:28:55.697414   10316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-400600 -n ha-400600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-400600 -n ha-400600: (12.6833065s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-400600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (68.86s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (102.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 node stop m02 -v=7 --alsologtostderr
E0709 10:45:13.298828   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 10:45:30.095259   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 node stop m02 -v=7 --alsologtostderr: (35.5962294s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-400600 status -v=7 --alsologtostderr: exit status 1 (31.5578206s)

                                                
                                                
** stderr ** 
	W0709 10:45:31.225147    9248 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0709 10:45:31.233502    9248 out.go:291] Setting OutFile to fd 1648 ...
	I0709 10:45:31.233502    9248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:45:31.233502    9248 out.go:304] Setting ErrFile to fd 1108...
	I0709 10:45:31.234581    9248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:45:31.256053    9248 out.go:298] Setting JSON to false
	I0709 10:45:31.256053    9248 mustload.go:65] Loading cluster: ha-400600
	I0709 10:45:31.256053    9248 notify.go:220] Checking for updates...
	I0709 10:45:31.256869    9248 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:45:31.256869    9248 status.go:255] checking status of ha-400600 ...
	I0709 10:45:31.258391    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:45:33.583625    9248 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:45:33.583625    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:45:33.583733    9248 status.go:330] ha-400600 host status = "Running" (err=<nil>)
	I0709 10:45:33.583932    9248 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:45:33.584863    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:45:35.898148    9248 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:45:35.898453    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:45:35.898453    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:45:38.672246    9248 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:45:38.672246    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:45:38.672246    9248 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:45:38.686210    9248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 10:45:38.686210    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:45:41.042120    9248 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:45:41.042681    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:45:41.042769    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:45:43.772497    9248 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:45:43.772497    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:45:43.772799    9248 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:45:43.881113    9248 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1948882s)
	I0709 10:45:43.896082    9248 ssh_runner.go:195] Run: systemctl --version
	I0709 10:45:43.929083    9248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 10:45:43.955906    9248 kubeconfig.go:125] found "ha-400600" server: "https://172.18.207.254:8443"
	I0709 10:45:43.955906    9248 api_server.go:166] Checking apiserver status ...
	I0709 10:45:43.967420    9248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 10:45:44.014065    9248 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2228/cgroup
	W0709 10:45:44.033920    9248 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2228/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0709 10:45:44.045980    9248 ssh_runner.go:195] Run: ls
	I0709 10:45:44.052556    9248 api_server.go:253] Checking apiserver healthz at https://172.18.207.254:8443/healthz ...
	I0709 10:45:44.063205    9248 api_server.go:279] https://172.18.207.254:8443/healthz returned 200:
	ok
	I0709 10:45:44.063205    9248 status.go:422] ha-400600 apiserver status = Running (err=<nil>)
	I0709 10:45:44.063205    9248 status.go:257] ha-400600 status: &{Name:ha-400600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 10:45:44.063205    9248 status.go:255] checking status of ha-400600-m02 ...
	I0709 10:45:44.063771    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:45:46.224386    9248 main.go:141] libmachine: [stdout =====>] : Off
	
	I0709 10:45:46.224846    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:45:46.224846    9248 status.go:330] ha-400600-m02 host status = "Stopped" (err=<nil>)
	I0709 10:45:46.224846    9248 status.go:343] host is not running, skipping remaining checks
	I0709 10:45:46.224846    9248 status.go:257] ha-400600-m02 status: &{Name:ha-400600-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 10:45:46.224846    9248 status.go:255] checking status of ha-400600-m03 ...
	I0709 10:45:46.225657    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:45:48.411312    9248 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:45:48.411312    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:45:48.412083    9248 status.go:330] ha-400600-m03 host status = "Running" (err=<nil>)
	I0709 10:45:48.412083    9248 host.go:66] Checking if "ha-400600-m03" exists ...
	I0709 10:45:48.412781    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:45:50.676754    9248 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:45:50.676754    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:45:50.676754    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:45:53.286084    9248 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:45:53.286809    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:45:53.286809    9248 host.go:66] Checking if "ha-400600-m03" exists ...
	I0709 10:45:53.299155    9248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 10:45:53.299155    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:45:55.529393    9248 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:45:55.529714    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:45:55.529792    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:45:58.175988    9248 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:45:58.175988    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:45:58.176713    9248 sshutil.go:53] new ssh client: &{IP:172.18.201.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\id_rsa Username:docker}
	I0709 10:45:58.278192    9248 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9790228s)
	I0709 10:45:58.290916    9248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 10:45:58.317617    9248 kubeconfig.go:125] found "ha-400600" server: "https://172.18.207.254:8443"
	I0709 10:45:58.317617    9248 api_server.go:166] Checking apiserver status ...
	I0709 10:45:58.330142    9248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 10:45:58.369984    9248 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2419/cgroup
	W0709 10:45:58.388654    9248 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2419/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0709 10:45:58.400546    9248 ssh_runner.go:195] Run: ls
	I0709 10:45:58.408266    9248 api_server.go:253] Checking apiserver healthz at https://172.18.207.254:8443/healthz ...
	I0709 10:45:58.418957    9248 api_server.go:279] https://172.18.207.254:8443/healthz returned 200:
	ok
	I0709 10:45:58.418957    9248 status.go:422] ha-400600-m03 apiserver status = Running (err=<nil>)
	I0709 10:45:58.418957    9248 status.go:257] ha-400600-m03 status: &{Name:ha-400600-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 10:45:58.419354    9248 status.go:255] checking status of ha-400600-m04 ...
	I0709 10:45:58.419972    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m04 ).state
	I0709 10:46:00.635149    9248 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:46:00.636399    9248 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:46:00.636471    9248 status.go:330] ha-400600-m04 host status = "Running" (err=<nil>)
	I0709 10:46:00.636471    9248 host.go:66] Checking if "ha-400600-m04" exists ...
	I0709 10:46:00.637555    9248 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m04 ).state

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-400600 status -v=7 --alsologtostderr" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-400600 -n ha-400600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-400600 -n ha-400600: (12.5380312s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 logs -n 25: (8.9007674s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt                                                                       | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:40 PDT | 09 Jul 24 10:40 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1711650581\001\cp-test_ha-400600-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n                                                                                                          | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:40 PDT | 09 Jul 24 10:40 PDT |
	|         | ha-400600-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt                                                                       | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:40 PDT | 09 Jul 24 10:40 PDT |
	|         | ha-400600:/home/docker/cp-test_ha-400600-m03_ha-400600.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n                                                                                                          | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:40 PDT | 09 Jul 24 10:41 PDT |
	|         | ha-400600-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n ha-400600 sudo cat                                                                                       | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:41 PDT | 09 Jul 24 10:41 PDT |
	|         | /home/docker/cp-test_ha-400600-m03_ha-400600.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt                                                                       | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:41 PDT | 09 Jul 24 10:41 PDT |
	|         | ha-400600-m02:/home/docker/cp-test_ha-400600-m03_ha-400600-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n                                                                                                          | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:41 PDT | 09 Jul 24 10:41 PDT |
	|         | ha-400600-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n ha-400600-m02 sudo cat                                                                                   | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:41 PDT | 09 Jul 24 10:41 PDT |
	|         | /home/docker/cp-test_ha-400600-m03_ha-400600-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt                                                                       | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:41 PDT | 09 Jul 24 10:42 PDT |
	|         | ha-400600-m04:/home/docker/cp-test_ha-400600-m03_ha-400600-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n                                                                                                          | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:42 PDT | 09 Jul 24 10:42 PDT |
	|         | ha-400600-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n ha-400600-m04 sudo cat                                                                                   | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:42 PDT | 09 Jul 24 10:42 PDT |
	|         | /home/docker/cp-test_ha-400600-m03_ha-400600-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-400600 cp testdata\cp-test.txt                                                                                         | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:42 PDT | 09 Jul 24 10:42 PDT |
	|         | ha-400600-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n                                                                                                          | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:42 PDT | 09 Jul 24 10:42 PDT |
	|         | ha-400600-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt                                                                       | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:42 PDT | 09 Jul 24 10:42 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1711650581\001\cp-test_ha-400600-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n                                                                                                          | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:42 PDT | 09 Jul 24 10:43 PDT |
	|         | ha-400600-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt                                                                       | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:43 PDT | 09 Jul 24 10:43 PDT |
	|         | ha-400600:/home/docker/cp-test_ha-400600-m04_ha-400600.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n                                                                                                          | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:43 PDT | 09 Jul 24 10:43 PDT |
	|         | ha-400600-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n ha-400600 sudo cat                                                                                       | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:43 PDT | 09 Jul 24 10:43 PDT |
	|         | /home/docker/cp-test_ha-400600-m04_ha-400600.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt                                                                       | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:43 PDT | 09 Jul 24 10:43 PDT |
	|         | ha-400600-m02:/home/docker/cp-test_ha-400600-m04_ha-400600-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n                                                                                                          | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:43 PDT | 09 Jul 24 10:44 PDT |
	|         | ha-400600-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n ha-400600-m02 sudo cat                                                                                   | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:44 PDT | 09 Jul 24 10:44 PDT |
	|         | /home/docker/cp-test_ha-400600-m04_ha-400600-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt                                                                       | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:44 PDT | 09 Jul 24 10:44 PDT |
	|         | ha-400600-m03:/home/docker/cp-test_ha-400600-m04_ha-400600-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n                                                                                                          | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:44 PDT | 09 Jul 24 10:44 PDT |
	|         | ha-400600-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-400600 ssh -n ha-400600-m03 sudo cat                                                                                   | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:44 PDT | 09 Jul 24 10:44 PDT |
	|         | /home/docker/cp-test_ha-400600-m04_ha-400600-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-400600 node stop m02 -v=7                                                                                              | ha-400600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:44 PDT | 09 Jul 24 10:45 PDT |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 10:16:02
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 10:16:02.755734    6700 out.go:291] Setting OutFile to fd 1532 ...
	I0709 10:16:02.756323    6700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:16:02.756323    6700 out.go:304] Setting ErrFile to fd 1372...
	I0709 10:16:02.756323    6700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:16:02.781053    6700 out.go:298] Setting JSON to false
	I0709 10:16:02.782532    6700 start.go:129] hostinfo: {"hostname":"minikube1","uptime":3631,"bootTime":1720541731,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 10:16:02.782532    6700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 10:16:02.796698    6700 out.go:177] * [ha-400600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 10:16:02.800534    6700 notify.go:220] Checking for updates...
	I0709 10:16:02.803091    6700 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:16:02.804820    6700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 10:16:02.807779    6700 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 10:16:02.814808    6700 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 10:16:02.818871    6700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 10:16:02.821273    6700 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 10:16:08.035987    6700 out.go:177] * Using the hyperv driver based on user configuration
	I0709 10:16:08.039729    6700 start.go:297] selected driver: hyperv
	I0709 10:16:08.039729    6700 start.go:901] validating driver "hyperv" against <nil>
	I0709 10:16:08.039729    6700 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 10:16:08.086300    6700 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 10:16:08.088400    6700 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 10:16:08.088400    6700 cni.go:84] Creating CNI manager for ""
	I0709 10:16:08.088400    6700 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0709 10:16:08.088400    6700 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0709 10:16:08.088400    6700 start.go:340] cluster config:
	{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 10:16:08.089479    6700 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 10:16:08.097177    6700 out.go:177] * Starting "ha-400600" primary control-plane node in "ha-400600" cluster
	I0709 10:16:08.102857    6700 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 10:16:08.102857    6700 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 10:16:08.102857    6700 cache.go:56] Caching tarball of preloaded images
	I0709 10:16:08.103408    6700 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 10:16:08.103655    6700 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 10:16:08.104197    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:16:08.104197    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json: {Name:mkd46017acd4713454e4339419b70af7bfbb4b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:16:08.105617    6700 start.go:360] acquireMachinesLock for ha-400600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 10:16:08.105617    6700 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-400600"
	I0709 10:16:08.105617    6700 start.go:93] Provisioning new machine with config: &{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:16:08.106218    6700 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 10:16:08.111683    6700 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 10:16:08.112335    6700 start.go:159] libmachine.API.Create for "ha-400600" (driver="hyperv")
	I0709 10:16:08.112335    6700 client.go:168] LocalClient.Create starting
	I0709 10:16:08.112528    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 10:16:08.113194    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:16:08.113237    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:16:08.113489    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 10:16:08.113736    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:16:08.113736    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:16:08.113736    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 10:16:10.121839    6700 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 10:16:10.124605    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:10.124689    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 10:16:11.863932    6700 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 10:16:11.863932    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:11.864030    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:16:13.256383    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:16:13.256476    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:13.256476    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:16:16.660501    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:16:16.672572    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:16.675158    6700 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 10:16:17.188138    6700 main.go:141] libmachine: Creating SSH key...
	I0709 10:16:17.276605    6700 main.go:141] libmachine: Creating VM...
	I0709 10:16:17.276605    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:16:19.966362    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:16:19.966362    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:19.966362    6700 main.go:141] libmachine: Using switch "Default Switch"
	I0709 10:16:19.978661    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:16:21.612793    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:16:21.620808    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:21.620808    6700 main.go:141] libmachine: Creating VHD
	I0709 10:16:21.621045    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 10:16:25.332547    6700 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 7D808E41-B9EE-446B-95C5-A2188640DBA0
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 10:16:25.332716    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:25.333051    6700 main.go:141] libmachine: Writing magic tar header
	I0709 10:16:25.333119    6700 main.go:141] libmachine: Writing SSH key tar header
	I0709 10:16:25.344742    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 10:16:28.542978    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:28.555344    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:28.555344    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\disk.vhd' -SizeBytes 20000MB
	I0709 10:16:31.127986    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:31.127986    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:31.139453    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-400600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 10:16:34.712385    6700 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-400600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 10:16:34.712451    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:34.712451    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-400600 -DynamicMemoryEnabled $false
	I0709 10:16:36.921106    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:36.921389    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:36.921389    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-400600 -Count 2
	I0709 10:16:39.096299    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:39.096299    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:39.096497    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-400600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\boot2docker.iso'
	I0709 10:16:41.581345    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:41.594208    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:41.594208    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-400600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\disk.vhd'
	I0709 10:16:44.138028    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:44.138028    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:44.138028    6700 main.go:141] libmachine: Starting VM...
	I0709 10:16:44.149701    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-400600
	I0709 10:16:47.183394    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:47.183394    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:47.183394    6700 main.go:141] libmachine: Waiting for host to start...
	I0709 10:16:47.183394    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:16:49.462357    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:16:49.462409    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:49.462526    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:16:51.942938    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:51.942938    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:52.959645    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:16:55.140390    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:16:55.140390    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:55.150751    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:16:57.651076    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:16:57.651180    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:16:58.665138    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:00.845849    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:00.849421    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:00.849580    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:03.302240    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:17:03.309323    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:04.320576    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:06.563997    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:06.564385    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:06.564385    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:09.032272    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:17:09.043856    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:10.048655    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:12.202466    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:12.202466    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:12.213773    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:14.649305    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:14.649305    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:14.660701    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:16.666623    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:16.666623    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:16.680075    6700 machine.go:94] provisionDockerMachine start ...
	I0709 10:17:16.680196    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:18.718694    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:18.718694    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:18.723268    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:21.151956    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:21.151956    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:21.158615    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:21.166747    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:21.166747    6700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 10:17:21.309374    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 10:17:21.309446    6700 buildroot.go:166] provisioning hostname "ha-400600"
	I0709 10:17:21.309446    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:23.340531    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:23.340531    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:23.354344    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:25.778007    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:25.778007    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:25.783510    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:25.783956    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:25.783956    6700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-400600 && echo "ha-400600" | sudo tee /etc/hostname
	I0709 10:17:25.938090    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-400600
	
	I0709 10:17:25.938188    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:27.930864    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:27.930864    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:27.943013    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:30.379360    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:30.379360    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:30.385515    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:30.385515    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:30.385515    6700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-400600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-400600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-400600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 10:17:30.529380    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 10:17:30.529380    6700 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 10:17:30.529380    6700 buildroot.go:174] setting up certificates
	I0709 10:17:30.529380    6700 provision.go:84] configureAuth start
	I0709 10:17:30.529380    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:32.547259    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:32.558673    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:32.558805    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:34.975911    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:34.987287    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:34.987287    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:36.980820    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:36.980820    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:36.991091    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:39.523378    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:39.523378    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:39.534939    6700 provision.go:143] copyHostCerts
	I0709 10:17:39.535158    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 10:17:39.535537    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 10:17:39.535537    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 10:17:39.535902    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 10:17:39.537314    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 10:17:39.537461    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 10:17:39.537461    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 10:17:39.537995    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 10:17:39.538912    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 10:17:39.538912    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 10:17:39.539445    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 10:17:39.539901    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 10:17:39.541051    6700 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-400600 san=[127.0.0.1 172.18.204.161 ha-400600 localhost minikube]
	I0709 10:17:39.804727    6700 provision.go:177] copyRemoteCerts
	I0709 10:17:39.835159    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 10:17:39.835159    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:41.854879    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:41.866183    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:41.866506    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:44.227384    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:44.227384    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:44.241571    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:17:44.348653    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5134839s)
	I0709 10:17:44.348653    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 10:17:44.349576    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 10:17:44.391304    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 10:17:44.391502    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0709 10:17:44.434730    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 10:17:44.435311    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 10:17:44.477867    6700 provision.go:87] duration metric: took 13.9484564s to configureAuth
	I0709 10:17:44.478026    6700 buildroot.go:189] setting minikube options for container-runtime
	I0709 10:17:44.478981    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:17:44.479218    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:46.510643    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:46.510643    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:46.523587    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:48.971159    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:48.971159    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:48.988688    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:48.989278    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:48.989418    6700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 10:17:49.123099    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 10:17:49.123099    6700 buildroot.go:70] root file system type: tmpfs
	I0709 10:17:49.123441    6700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 10:17:49.123524    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:51.153888    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:51.165482    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:51.165482    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:53.534359    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:53.546598    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:53.552181    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:53.552933    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:53.552933    6700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 10:17:53.703587    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 10:17:53.703587    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:17:55.737929    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:17:55.738046    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:55.738046    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:17:58.144661    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:17:58.144661    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:17:58.150610    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:17:58.151350    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:17:58.151350    6700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 10:18:00.274769    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 10:18:00.274769    6700 machine.go:97] duration metric: took 43.5945976s to provisionDockerMachine
	I0709 10:18:00.274769    6700 client.go:171] duration metric: took 1m52.1621887s to LocalClient.Create
	I0709 10:18:00.274891    6700 start.go:167] duration metric: took 1m52.1623108s to libmachine.API.Create "ha-400600"
	I0709 10:18:00.274891    6700 start.go:293] postStartSetup for "ha-400600" (driver="hyperv")
	I0709 10:18:00.274976    6700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 10:18:00.285971    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 10:18:00.285971    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:02.341060    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:02.341060    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:02.341060    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:04.858604    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:04.858672    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:04.858672    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:18:04.978651    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6926698s)
	I0709 10:18:04.990287    6700 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 10:18:04.993635    6700 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 10:18:04.993635    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 10:18:04.999257    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 10:18:04.999579    6700 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 10:18:04.999579    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 10:18:05.016246    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 10:18:05.035000    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 10:18:05.080190    6700 start.go:296] duration metric: took 4.8052884s for postStartSetup
	I0709 10:18:05.083441    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:07.109669    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:07.109669    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:07.109870    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:09.511461    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:09.511461    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:09.511461    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:18:09.526873    6700 start.go:128] duration metric: took 2m1.4203894s to createHost
	I0709 10:18:09.526873    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:11.544559    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:11.555529    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:11.555529    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:13.940961    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:13.952666    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:13.958216    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:18:13.958216    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:18:13.958818    6700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 10:18:14.089176    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720545494.094996238
	
	I0709 10:18:14.089246    6700 fix.go:216] guest clock: 1720545494.094996238
	I0709 10:18:14.089246    6700 fix.go:229] Guest: 2024-07-09 10:18:14.094996238 -0700 PDT Remote: 2024-07-09 10:18:09.5268731 -0700 PDT m=+126.869214101 (delta=4.568123138s)
	I0709 10:18:14.089374    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:16.125196    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:16.125196    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:16.135644    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:18.543707    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:18.554588    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:18.560685    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:18:18.560902    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.161 22 <nil> <nil>}
	I0709 10:18:18.560902    6700 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720545494
	I0709 10:18:18.699178    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 17:18:14 UTC 2024
	
	I0709 10:18:18.699178    6700 fix.go:236] clock set: Tue Jul  9 17:18:14 UTC 2024
	 (err=<nil>)
	I0709 10:18:18.699178    6700 start.go:83] releasing machines lock for "ha-400600", held for 2m10.5932749s
	I0709 10:18:18.699178    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:20.750622    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:20.762271    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:20.762271    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:23.245945    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:23.245945    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:23.261448    6700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 10:18:23.261599    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:23.270975    6700 ssh_runner.go:195] Run: cat /version.json
	I0709 10:18:23.270975    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:18:25.494825    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:25.494825    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:25.494825    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:25.494825    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:18:25.495129    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:25.495285    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:18:28.071630    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:28.071773    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:28.072033    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:18:28.083910    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:18:28.083910    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:18:28.088997    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:18:28.253218    6700 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9916846s)
	I0709 10:18:28.254582    6700 ssh_runner.go:235] Completed: cat /version.json: (4.9835953s)
	I0709 10:18:28.267398    6700 ssh_runner.go:195] Run: systemctl --version
	I0709 10:18:28.287704    6700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0709 10:18:28.296624    6700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 10:18:28.308282    6700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 10:18:28.351681    6700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 10:18:28.351681    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:18:28.351681    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:18:28.401308    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 10:18:28.430815    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 10:18:28.452144    6700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 10:18:28.464240    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 10:18:28.498622    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:18:28.528662    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 10:18:28.563632    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:18:28.592490    6700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 10:18:28.625044    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 10:18:28.655962    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 10:18:28.686604    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 10:18:28.718208    6700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 10:18:28.748482    6700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 10:18:28.783522    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:28.968252    6700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 10:18:28.998812    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:18:29.013344    6700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 10:18:29.051240    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:18:29.082624    6700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 10:18:29.132080    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:18:29.164879    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:18:29.203809    6700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 10:18:29.265871    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:18:29.289323    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:18:29.332078    6700 ssh_runner.go:195] Run: which cri-dockerd
	I0709 10:18:29.351028    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 10:18:29.368258    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 10:18:29.410302    6700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 10:18:29.591968    6700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 10:18:29.769531    6700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 10:18:29.769797    6700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 10:18:29.820076    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:30.007600    6700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 10:18:32.584936    6700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.577331s)
	I0709 10:18:32.595944    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 10:18:32.632112    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:18:32.665353    6700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 10:18:32.862030    6700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 10:18:33.042564    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:33.236992    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 10:18:33.284318    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:18:33.318192    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:33.516188    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 10:18:33.616539    6700 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 10:18:33.628081    6700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
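"Will wait 60s for socket path" above is implemented as a stat-poll loop over /var/run/cri-dockerd.sock. A minimal version of the same pattern, with a plain temp file created slightly later standing in for the socket:

```shell
# Poll with stat until the path appears or the retry budget runs out.
TARGET="$(mktemp -u)"            # path that does not exist yet
( sleep 1; : > "$TARGET" ) &     # something creates it shortly
for _ in $(seq 1 60); do
  stat "$TARGET" >/dev/null 2>&1 && break
  sleep 0.2
done
wait
stat "$TARGET" >/dev/null 2>&1 && echo ready
```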
	I0709 10:18:33.638821    6700 start.go:562] Will wait 60s for crictl version
	I0709 10:18:33.650086    6700 ssh_runner.go:195] Run: which crictl
	I0709 10:18:33.668004    6700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 10:18:33.720319    6700 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 10:18:33.730425    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:18:33.777123    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:18:33.808693    6700 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 10:18:33.808902    6700 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 10:18:33.812731    6700 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 10:18:33.812731    6700 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 10:18:33.812731    6700 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 10:18:33.812731    6700 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 10:18:33.815959    6700 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 10:18:33.815959    6700 ip.go:210] interface addr: 172.18.192.1/20
	I0709 10:18:33.822040    6700 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 10:18:33.828624    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
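The /etc/hosts edit above is written to be idempotent: drop any existing host.minikube.internal line, append one with the current gateway IP, then replace the file with a single cp. The same pattern, run against a scratch copy of /etc/hosts:

```shell
# Remove the stale entry (tab-anchored match), append the fresh one,
# then swap the file in with one cp.
HOSTS="$(mktemp)"
printf '127.0.0.1\tlocalhost\n172.18.0.9\thost.minikube.internal\n' > "$HOSTS"
IP=172.18.192.1
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
  printf '%s\thost.minikube.internal\n' "$IP"; } > "$HOSTS.new"
cp "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Running it twice leaves exactly one host.minikube.internal entry, which is why the grep on the line before can safely precede the rewrite.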
	I0709 10:18:33.864539    6700 kubeadm.go:877] updating cluster {Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 10:18:33.864539    6700 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 10:18:33.875617    6700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 10:18:33.897225    6700 docker.go:685] Got preloaded images: 
	I0709 10:18:33.897225    6700 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 10:18:33.909232    6700 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 10:18:33.940216    6700 ssh_runner.go:195] Run: which lz4
	I0709 10:18:33.946436    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0709 10:18:33.957436    6700 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0709 10:18:33.965688    6700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 10:18:33.965907    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0709 10:18:36.339183    6700 docker.go:649] duration metric: took 2.3922949s to copy over tarball
	I0709 10:18:36.350129    6700 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 10:18:44.724750    6700 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3746026s)
	I0709 10:18:44.724863    6700 ssh_runner.go:146] rm: /preloaded.tar.lz4
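The preload flow above is: stat the remote path, and only on failure copy the tarball over and unpack it with `tar -I lz4`. The control flow can be sketched with a temp path and a plain file creation standing in for the scp and lz4 steps:

```shell
# If the stat existence check fails, do the (simulated) transfer.
PRELOAD="$(mktemp -u)"                      # does not exist yet
if ! stat -c '%s %y' "$PRELOAD" >/dev/null 2>&1; then
  echo 'preload tarball missing; copying'   # scp + tar -I lz4 would go here
  : > "$PRELOAD"
fi
stat -c '%s' "$PRELOAD"
```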
	I0709 10:18:44.804514    6700 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 10:18:44.827709    6700 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 10:18:44.869022    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:45.085069    6700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 10:18:48.726946    6700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6418689s)
	I0709 10:18:48.737009    6700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 10:18:48.768415    6700 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 10:18:48.768495    6700 cache_images.go:84] Images are preloaded, skipping loading
	I0709 10:18:48.768590    6700 kubeadm.go:928] updating node { 172.18.204.161 8443 v1.30.2 docker true true} ...
	I0709 10:18:48.768859    6700 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-400600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.204.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 10:18:48.778722    6700 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 10:18:48.810409    6700 cni.go:84] Creating CNI manager for ""
	I0709 10:18:48.810503    6700 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 10:18:48.810544    6700 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 10:18:48.810595    6700 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.204.161 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-400600 NodeName:ha-400600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.204.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.204.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 10:18:48.810943    6700 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.204.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-400600"
	  kubeletExtraArgs:
	    node-ip: 172.18.204.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.204.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
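The kubeadm.yaml rendered above bundles four API documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), separated by `---`. A quick structural sanity check on a skeleton copy of that layout:

```shell
# One "kind:" per document; four documents expected.
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$CFG"   # prints 4
```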
	
	I0709 10:18:48.811017    6700 kube-vip.go:115] generating kube-vip config ...
	I0709 10:18:48.823921    6700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0709 10:18:48.847812    6700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0709 10:18:48.848004    6700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
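kube-vip takes all of its configuration through env vars on the static pod; in particular the `address` var must match the cluster's APIServerHAVIP (172.18.207.254 here). Extracting it from a fragment of the manifest above:

```shell
# Pull the value following the "address" env var out of the manifest.
FRAG="$(mktemp)"
cat > "$FRAG" <<'EOF'
    - name: address
      value: 172.18.207.254
EOF
VIP="$(awk '/name: address/ { getline; print $2 }' "$FRAG")"
echo "$VIP"
```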
	I0709 10:18:48.862161    6700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 10:18:48.879558    6700 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 10:18:48.891472    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0709 10:18:48.910116    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0709 10:18:48.940154    6700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 10:18:48.968982    6700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0709 10:18:48.998305    6700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0709 10:18:49.037911    6700 ssh_runner.go:195] Run: grep 172.18.207.254	control-plane.minikube.internal$ /etc/hosts
	I0709 10:18:49.046289    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 10:18:49.081649    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:18:49.265964    6700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:18:49.298848    6700 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600 for IP: 172.18.204.161
	I0709 10:18:49.298848    6700 certs.go:194] generating shared ca certs ...
	I0709 10:18:49.298967    6700 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.299532    6700 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 10:18:49.300389    6700 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 10:18:49.300576    6700 certs.go:256] generating profile certs ...
	I0709 10:18:49.301344    6700 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.key
	I0709 10:18:49.301525    6700 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.crt with IP's: []
	I0709 10:18:49.441961    6700 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.crt ...
	I0709 10:18:49.441961    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.crt: {Name:mka7233808da0cc81632207b9cdb68c316f32895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.448722    6700 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.key ...
	I0709 10:18:49.448722    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.key: {Name:mk92fb6d80beea0dec3e1f38459a29efbebff793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.450331    6700 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.e33266d4
	I0709 10:18:49.450331    6700 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.e33266d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.204.161 172.18.207.254]
	I0709 10:18:49.588257    6700 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.e33266d4 ...
	I0709 10:18:49.588257    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.e33266d4: {Name:mkdeea7a9e8afe19683dfc98b89e22e9ca2d0712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.593630    6700 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.e33266d4 ...
	I0709 10:18:49.593630    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.e33266d4: {Name:mk7249980be063f719f37f8a47747048fcd9bda7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.594856    6700 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.e33266d4 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt
	I0709 10:18:49.608854    6700 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.e33266d4 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key
	I0709 10:18:49.610432    6700 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key
	I0709 10:18:49.610584    6700 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt with IP's: []
	I0709 10:18:49.837821    6700 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt ...
	I0709 10:18:49.837821    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt: {Name:mke8951db5c0b1a6a0535481591e54fe9476f99c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.838328    6700 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key ...
	I0709 10:18:49.838328    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key: {Name:mkdf05987e1446dc8d4c051f44a8aded138f8ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:18:49.839847    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 10:18:49.840869    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 10:18:49.841052    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 10:18:49.841263    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 10:18:49.841462    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 10:18:49.841644    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 10:18:49.841817    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 10:18:49.852866    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 10:18:49.853154    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 10:18:49.854671    6700 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 10:18:49.854671    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 10:18:49.854920    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 10:18:49.855467    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 10:18:49.855751    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 10:18:49.856146    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 10:18:49.856146    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 10:18:49.856909    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 10:18:49.856909    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:18:49.857562    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 10:18:49.904005    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 10:18:49.948445    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 10:18:49.996678    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 10:18:50.042295    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0709 10:18:50.084529    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 10:18:50.146108    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 10:18:50.196397    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 10:18:50.231703    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 10:18:50.280271    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 10:18:50.325896    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 10:18:50.368265    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 10:18:50.412784    6700 ssh_runner.go:195] Run: openssl version
	I0709 10:18:50.433529    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 10:18:50.466312    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:18:50.473337    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:18:50.486361    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:18:50.504540    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 10:18:50.537329    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 10:18:50.570308    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 10:18:50.573304    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 10:18:50.579113    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 10:18:50.607993    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 10:18:50.640829    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 10:18:50.672344    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 10:18:50.675802    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 10:18:50.690315    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 10:18:50.712157    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
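The loop above installs each CA cert and then symlinks it under its OpenSSL subject-hash name (e.g. b5213941.0), which is how OpenSSL's default verify path locates CAs. The same technique, reproduced with a throwaway self-signed cert in a temp dir (requires the openssl CLI):

```shell
# Hash the PEM, then symlink <hash>.0 at it, as the log's ln -fs does.
DIR="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=sketch' \
  -keyout "$DIR/key.pem" -out "$DIR/cert.pem" 2>/dev/null
HASH="$(openssl x509 -hash -noout -in "$DIR/cert.pem")"
ln -fs "$DIR/cert.pem" "$DIR/$HASH.0"
ls "$DIR/$HASH.0" >/dev/null && echo linked
```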
	I0709 10:18:50.743720    6700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 10:18:50.751665    6700 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 10:18:50.752123    6700 kubeadm.go:391] StartCluster: {Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clu
sterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 10:18:50.760948    6700 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 10:18:50.798428    6700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 10:18:50.829005    6700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 10:18:50.860662    6700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 10:18:50.874648    6700 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 10:18:50.874648    6700 kubeadm.go:156] found existing configuration files:
	
	I0709 10:18:50.892883    6700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 10:18:50.905920    6700 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 10:18:50.918204    6700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 10:18:50.945478    6700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 10:18:50.959142    6700 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 10:18:50.970041    6700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 10:18:50.997544    6700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 10:18:51.013191    6700 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 10:18:51.030253    6700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 10:18:51.060115    6700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 10:18:51.078035    6700 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 10:18:51.090874    6700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
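The grep-then-rm sequence above (kubeadm.go:162) keeps each kubeconfig only if it references the expected control-plane endpoint, and removes it otherwise so `kubeadm init` regenerates it. A minimal sketch of that logic, run against a temp directory instead of `/etc/kubernetes` (file names taken from the log; file contents are illustrative):

```shell
# Stale-kubeconfig cleanup, as minikube performs it above: a config survives
# only if it points at the expected control-plane endpoint.
ETC=$(mktemp -d)                                   # stand-in for /etc/kubernetes
ENDPOINT="https://control-plane.minikube.internal:8443"
printf 'server: %s\n' "$ENDPOINT" > "$ETC/admin.conf"            # fresh config
printf 'server: https://old-host:8443\n' > "$ETC/kubelet.conf"   # stale config
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! grep -q "$ENDPOINT" "$ETC/$f" 2>/dev/null; then
    rm -f "$ETC/$f"   # stale or missing: remove so kubeadm regenerates it
  fi
done
ls "$ETC"
```

In this run every file was missing, so all four `grep`s exited with status 2 and all four `rm -f` calls were no-ops.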
	I0709 10:18:51.107417    6700 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 10:18:51.501898    6700 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 10:19:05.600789    6700 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 10:19:05.600789    6700 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 10:19:05.600789    6700 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 10:19:05.601338    6700 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 10:19:05.601664    6700 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 10:19:05.601895    6700 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 10:19:05.608187    6700 out.go:204]   - Generating certificates and keys ...
	I0709 10:19:05.608187    6700 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 10:19:05.608187    6700 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 10:19:05.608911    6700 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 10:19:05.608911    6700 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 10:19:05.608911    6700 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 10:19:05.608911    6700 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 10:19:05.609443    6700 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 10:19:05.609623    6700 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-400600 localhost] and IPs [172.18.204.161 127.0.0.1 ::1]
	I0709 10:19:05.609623    6700 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 10:19:05.610191    6700 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-400600 localhost] and IPs [172.18.204.161 127.0.0.1 ::1]
	I0709 10:19:05.610357    6700 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 10:19:05.610415    6700 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 10:19:05.610415    6700 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 10:19:05.610415    6700 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 10:19:05.611745    6700 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 10:19:05.611842    6700 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 10:19:05.615239    6700 out.go:204]   - Booting up control plane ...
	I0709 10:19:05.615377    6700 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 10:19:05.615377    6700 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 10:19:05.615377    6700 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 10:19:05.616215    6700 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 10:19:05.616409    6700 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 10:19:05.616409    6700 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 10:19:05.616409    6700 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 10:19:05.616409    6700 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 10:19:05.616409    6700 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.0021856s
	I0709 10:19:05.616409    6700 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 10:19:05.616409    6700 kubeadm.go:309] [api-check] The API server is healthy after 7.502453028s
	I0709 10:19:05.616409    6700 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 10:19:05.616409    6700 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 10:19:05.616409    6700 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 10:19:05.616409    6700 kubeadm.go:309] [mark-control-plane] Marking the node ha-400600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 10:19:05.616409    6700 kubeadm.go:309] [bootstrap-token] Using token: zh32lj.urxnr10p0ojd6j1h
	I0709 10:19:05.621477    6700 out.go:204]   - Configuring RBAC rules ...
	I0709 10:19:05.621948    6700 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 10:19:05.621948    6700 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 10:19:05.622575    6700 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 10:19:05.622828    6700 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 10:19:05.622828    6700 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 10:19:05.623497    6700 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 10:19:05.623632    6700 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 10:19:05.623632    6700 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 10:19:05.623632    6700 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 10:19:05.623632    6700 kubeadm.go:309] 
	I0709 10:19:05.623632    6700 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 10:19:05.623632    6700 kubeadm.go:309] 
	I0709 10:19:05.624340    6700 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 10:19:05.624340    6700 kubeadm.go:309] 
	I0709 10:19:05.624432    6700 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 10:19:05.624859    6700 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 10:19:05.625002    6700 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 10:19:05.625002    6700 kubeadm.go:309] 
	I0709 10:19:05.625198    6700 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 10:19:05.625259    6700 kubeadm.go:309] 
	I0709 10:19:05.625259    6700 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 10:19:05.625259    6700 kubeadm.go:309] 
	I0709 10:19:05.625259    6700 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 10:19:05.625259    6700 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 10:19:05.625846    6700 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 10:19:05.625846    6700 kubeadm.go:309] 
	I0709 10:19:05.626072    6700 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 10:19:05.626440    6700 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 10:19:05.626440    6700 kubeadm.go:309] 
	I0709 10:19:05.626778    6700 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zh32lj.urxnr10p0ojd6j1h \
	I0709 10:19:05.630822    6700 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 10:19:05.630822    6700 kubeadm.go:309] 	--control-plane 
	I0709 10:19:05.630822    6700 kubeadm.go:309] 
	I0709 10:19:05.630822    6700 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 10:19:05.630822    6700 kubeadm.go:309] 
	I0709 10:19:05.631527    6700 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zh32lj.urxnr10p0ojd6j1h \
	I0709 10:19:05.631739    6700 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
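The join command printed above carries the two credentials any additional control-plane or worker node needs: the bootstrap token and the discovery CA-cert hash. A hedged sketch of pulling both out of that output with sed (the two-line string below is copied from this log; a real run would capture `kubeadm init`'s stdout instead):

```shell
# Extract --token and --discovery-token-ca-cert-hash from kubeadm init output.
# The variable reproduces the two join lines recorded in this log.
init_out='kubeadm join control-plane.minikube.internal:8443 --token zh32lj.urxnr10p0ojd6j1h \
	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307'
token=$(printf '%s\n' "$init_out" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
cahash=$(printf '%s\n' "$init_out" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token=$token cahash=$cahash"
```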
	I0709 10:19:05.631872    6700 cni.go:84] Creating CNI manager for ""
	I0709 10:19:05.631872    6700 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 10:19:05.634892    6700 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0709 10:19:05.648230    6700 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0709 10:19:05.659012    6700 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0709 10:19:05.659067    6700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0709 10:19:05.707277    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0709 10:19:06.378276    6700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 10:19:06.392530    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-400600 minikube.k8s.io/updated_at=2024_07_09T10_19_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=ha-400600 minikube.k8s.io/primary=true
	I0709 10:19:06.392530    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:06.410536    6700 ops.go:34] apiserver oom_adj: -16
	I0709 10:19:06.571931    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:07.073723    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:07.582109    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:08.083450    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:08.588699    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:09.085619    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:09.574693    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:10.076173    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:10.580430    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:11.084027    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:11.589064    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:12.075414    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:12.583098    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:13.094890    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:13.584700    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:14.077349    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:14.583923    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:15.083743    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:15.582350    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:16.080505    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:16.583413    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:17.075784    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:17.577060    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:18.081648    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:18.583111    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:19.082266    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 10:19:19.211669    6700 kubeadm.go:1107] duration metric: took 12.8331464s to wait for elevateKubeSystemPrivileges
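The burst of `kubectl get sa default` runs above is minikube polling at roughly 500ms intervals until the `default` service account exists (the 12.8s duration metric it reports for elevateKubeSystemPrivileges). The same wait-until-ready pattern, sketched with a stub in place of the kubectl call:

```shell
# Poll a readiness check until it succeeds or a deadline passes.
# `check` is a stand-in for `kubectl get sa default`; here it is wired to
# succeed on the third retry so the loop terminates quickly.
deadline=$(( $(date +%s) + 10 ))
attempts=0
check() { [ "$attempts" -ge 3 ]; }
until check; do
  if [ "$(date +%s)" -ge "$deadline" ]; then echo "timed out" >&2; break; fi
  attempts=$((attempts + 1))
  sleep 0.1   # the log shows ~500ms between attempts
done
echo "ready after $attempts attempts"
```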
	W0709 10:19:19.211767    6700 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 10:19:19.211836    6700 kubeadm.go:393] duration metric: took 28.4595816s to StartCluster
	I0709 10:19:19.211836    6700 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:19:19.212088    6700 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:19:19.214037    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:19:19.215560    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 10:19:19.215623    6700 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:19:19.215623    6700 start.go:240] waiting for startup goroutines ...
	I0709 10:19:19.215623    6700 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0709 10:19:19.215623    6700 addons.go:69] Setting storage-provisioner=true in profile "ha-400600"
	I0709 10:19:19.215623    6700 addons.go:69] Setting default-storageclass=true in profile "ha-400600"
	I0709 10:19:19.215623    6700 addons.go:234] Setting addon storage-provisioner=true in "ha-400600"
	I0709 10:19:19.215623    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:19:19.216231    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:19:19.215623    6700 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-400600"
	I0709 10:19:19.217293    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:19:19.217897    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:19:19.389113    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 10:19:19.803341    6700 start.go:946] {"host.minikube.internal": 172.18.192.1} host record injected into CoreDNS's ConfigMap
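The CoreDNS rewrite logged above inserts a `hosts` block (mapping `host.minikube.internal` to the gateway IP) ahead of the `forward` plugin, plus a `log` directive after `errors`. The same sed expressions from the log, applied here to an inline Corefile fragment instead of the live ConfigMap (GNU sed assumed; the fragment is illustrative):

```shell
# Patch a Corefile fragment the way minikube patches the coredns ConfigMap.
corefile='        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }'
patched=$(printf '%s\n' "$corefile" | sed \
  -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' \
  -e '/^        errors *$/i \        log')
printf '%s\n' "$patched"
```

In the real pipeline the patched text is fed straight to `kubectl replace -f -`, which is why the log reports the host record injected into CoreDNS's ConfigMap immediately afterwards.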
	I0709 10:19:21.526158    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:19:21.526158    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:21.526158    6700 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:19:21.529180    6700 kapi.go:59] client config for ha-400600: &rest.Config{Host:"https://172.18.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 10:19:21.531019    6700 cert_rotation.go:137] Starting client certificate rotation controller
	I0709 10:19:21.531395    6700 addons.go:234] Setting addon default-storageclass=true in "ha-400600"
	I0709 10:19:21.531510    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:19:21.532672    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:19:21.539846    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:19:21.539909    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:21.543245    6700 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 10:19:21.546095    6700 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 10:19:21.546095    6700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 10:19:21.546095    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:19:23.834220    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:19:23.834220    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:23.834220    6700 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 10:19:23.834342    6700 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 10:19:23.834413    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:19:23.836716    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:19:23.836794    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:23.836869    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:19:26.073006    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:19:26.088033    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:26.088033    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:19:26.544598    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:19:26.544598    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:26.544598    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:19:26.682943    6700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 10:19:28.672371    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:19:28.678957    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:28.678957    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:19:28.810737    6700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 10:19:29.045903    6700 round_trippers.go:463] GET https://172.18.207.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0709 10:19:29.045971    6700 round_trippers.go:469] Request Headers:
	I0709 10:19:29.045971    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:19:29.046027    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:19:29.072173    6700 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0709 10:19:29.073170    6700 round_trippers.go:463] PUT https://172.18.207.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0709 10:19:29.073170    6700 round_trippers.go:469] Request Headers:
	I0709 10:19:29.073170    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:19:29.073170    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:19:29.073170    6700 round_trippers.go:473]     Content-Type: application/json
	I0709 10:19:29.073765    6700 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0709 10:19:29.081778    6700 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0709 10:19:29.087882    6700 addons.go:510] duration metric: took 9.8722371s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0709 10:19:29.087882    6700 start.go:245] waiting for cluster config update ...
	I0709 10:19:29.087882    6700 start.go:254] writing updated cluster config ...
	I0709 10:19:29.093827    6700 out.go:177] 
	I0709 10:19:29.103899    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:19:29.103899    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:19:29.110903    6700 out.go:177] * Starting "ha-400600-m02" control-plane node in "ha-400600" cluster
	I0709 10:19:29.117086    6700 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 10:19:29.117086    6700 cache.go:56] Caching tarball of preloaded images
	I0709 10:19:29.117641    6700 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 10:19:29.117887    6700 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 10:19:29.118190    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:19:29.119510    6700 start.go:360] acquireMachinesLock for ha-400600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 10:19:29.121096    6700 start.go:364] duration metric: took 1.5865ms to acquireMachinesLock for "ha-400600-m02"
	I0709 10:19:29.121277    6700 start.go:93] Provisioning new machine with config: &{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:19:29.121277    6700 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0709 10:19:29.123470    6700 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 10:19:29.128869    6700 start.go:159] libmachine.API.Create for "ha-400600" (driver="hyperv")
	I0709 10:19:29.128869    6700 client.go:168] LocalClient.Create starting
	I0709 10:19:29.129128    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 10:19:29.129825    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:19:29.129825    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:19:29.130039    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 10:19:29.130258    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:19:29.130258    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:19:29.130490    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 10:19:31.048681    6700 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 10:19:31.048681    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:31.048681    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 10:19:32.828860    6700 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 10:19:32.828860    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:32.829203    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:19:34.333254    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:19:34.333398    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:34.333398    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:19:37.989649    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:19:37.989917    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:37.994114    6700 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 10:19:38.517380    6700 main.go:141] libmachine: Creating SSH key...
	I0709 10:19:38.712054    6700 main.go:141] libmachine: Creating VM...
	I0709 10:19:38.712054    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:19:41.608194    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:19:41.609105    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:41.609216    6700 main.go:141] libmachine: Using switch "Default Switch"
	I0709 10:19:41.609216    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:19:43.374004    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:19:43.374004    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:43.374004    6700 main.go:141] libmachine: Creating VHD
	I0709 10:19:43.374219    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 10:19:47.211864    6700 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4ECC2E3E-23F6-44BF-8AA9-605DE177D552
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 10:19:47.211864    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:47.211864    6700 main.go:141] libmachine: Writing magic tar header
	I0709 10:19:47.211990    6700 main.go:141] libmachine: Writing SSH key tar header
	I0709 10:19:47.222156    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 10:19:50.456678    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:19:50.456678    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:50.456678    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\disk.vhd' -SizeBytes 20000MB
	I0709 10:19:52.979059    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:19:52.979059    6700 main.go:141] libmachine: [stderr =====>] : 
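	The disk sequence above (create a fixed 10MB stub VHD, write a "magic" tar header carrying the SSH key, convert it to a dynamic VHD, then resize to 20000MB) can be sketched with plain files. This is an illustrative sketch only: it uses a raw image instead of a real VHD, and the tar contents and file names are assumptions, not minikube's exact layout.

	```shell
	# Sketch of the stub-then-grow disk flow above (illustrative names).
	workdir=$(mktemp -d)
	echo "ssh-rsa AAAA... example-key" > "$workdir/authorized_keys"
	# "magic tar header": a small tar archive placed at the start of the disk
	tar -C "$workdir" -cf "$workdir/key.tar" authorized_keys
	# 10MB stub, analogous to the fixed.vhd created with -SizeBytes 10MB
	truncate -s 10M "$workdir/disk.img"
	# write the tar at offset 0 without truncating the image
	dd if="$workdir/key.tar" of="$workdir/disk.img" conv=notrunc status=none
	stat -c '%s' "$workdir/disk.img"
	```

	The stub stays 10MB because the tar is written in place; the real code then lets Hyper-V grow the dynamic VHD to the requested 20000MB.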
	I0709 10:19:52.979940    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-400600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 10:19:56.661078    6700 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-400600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 10:19:56.661078    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:56.661870    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-400600-m02 -DynamicMemoryEnabled $false
	I0709 10:19:58.913011    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:19:58.913011    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:19:58.913011    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-400600-m02 -Count 2
	I0709 10:20:01.123871    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:01.123871    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:01.123871    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-400600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\boot2docker.iso'
	I0709 10:20:03.737775    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:03.737775    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:03.738634    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-400600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\disk.vhd'
	I0709 10:20:06.453581    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:06.453976    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:06.453976    6700 main.go:141] libmachine: Starting VM...
	I0709 10:20:06.453976    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-400600-m02
	I0709 10:20:09.542044    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:09.542044    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:09.542610    6700 main.go:141] libmachine: Waiting for host to start...
	I0709 10:20:09.542610    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:11.856562    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:11.857260    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:11.857260    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:14.430002    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:14.431142    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:15.432860    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:17.684806    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:17.684806    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:17.684806    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:20.300020    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:20.300020    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:21.304371    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:23.609730    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:23.609730    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:23.609730    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:26.183209    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:26.183270    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:27.183572    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:29.418240    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:29.418240    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:29.418240    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:31.966591    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:20:31.966591    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:32.971572    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:35.224209    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:35.224209    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:35.224680    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:37.890347    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:20:37.891029    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:37.891123    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:40.062862    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:40.063532    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:40.063605    6700 machine.go:94] provisionDockerMachine start ...
	I0709 10:20:40.063605    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:42.232343    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:42.232343    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:42.232436    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:44.786215    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:20:44.786267    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:44.791325    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:20:44.803070    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:20:44.803070    6700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 10:20:44.931955    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 10:20:44.932066    6700 buildroot.go:166] provisioning hostname "ha-400600-m02"
	I0709 10:20:44.932066    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:47.129679    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:47.130081    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:47.130081    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:49.749823    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:20:49.750347    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:49.755721    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:20:49.757091    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:20:49.757091    6700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-400600-m02 && echo "ha-400600-m02" | sudo tee /etc/hostname
	I0709 10:20:49.912990    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-400600-m02
	
	I0709 10:20:49.912990    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:52.136717    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:52.136717    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:52.136885    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:54.710906    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:20:54.710906    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:54.717068    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:20:54.717648    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:20:54.717743    6700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-400600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-400600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-400600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
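	The /etc/hosts pinning script above can be exercised locally against a scratch copy (same logic as the SSH command, run on a hypothetical temp file rather than the VM's real /etc/hosts):

	```shell
	# Reproduce the hostname pinning above on a scratch hosts file.
	hosts=$(mktemp)
	printf '127.0.0.1 localhost\n127.0.1.1 stale-name\n' > "$hosts"
	name=ha-400600-m02
	if ! grep -q "[[:space:]]$name\$" "$hosts"; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
	    # rewrite the existing 127.0.1.1 entry, as the sed above does
	    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
	  else
	    echo "127.0.1.1 $name" >> "$hosts"
	  fi
	fi
	grep '^127\.0\.1\.1' "$hosts"
	```

	The branch structure matters: an existing 127.0.1.1 line is replaced in place so the file never carries two conflicting hostname entries.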
	I0709 10:20:54.862361    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 10:20:54.862361    6700 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 10:20:54.862361    6700 buildroot.go:174] setting up certificates
	I0709 10:20:54.862361    6700 provision.go:84] configureAuth start
	I0709 10:20:54.862361    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:20:57.004888    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:20:57.004888    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:57.004888    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:20:59.574239    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:20:59.574239    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:20:59.575124    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:01.706855    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:01.706855    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:01.706963    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:04.264323    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:04.264323    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:04.264323    6700 provision.go:143] copyHostCerts
	I0709 10:21:04.264615    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 10:21:04.264997    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 10:21:04.264997    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 10:21:04.264997    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 10:21:04.266186    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 10:21:04.266186    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 10:21:04.266186    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 10:21:04.266186    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 10:21:04.266186    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 10:21:04.266186    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 10:21:04.266186    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 10:21:04.266186    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 10:21:04.266186    6700 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-400600-m02 san=[127.0.0.1 172.18.194.29 ha-400600-m02 localhost minikube]
	I0709 10:21:04.924276    6700 provision.go:177] copyRemoteCerts
	I0709 10:21:04.937812    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 10:21:04.937812    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:07.111725    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:07.112064    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:07.112064    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:09.707523    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:09.708076    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:09.708157    6700 sshutil.go:53] new ssh client: &{IP:172.18.194.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\id_rsa Username:docker}
	I0709 10:21:09.811548    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.873725s)
	I0709 10:21:09.811548    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 10:21:09.812494    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 10:21:09.859280    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 10:21:09.859280    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0709 10:21:09.907184    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 10:21:09.907405    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0709 10:21:09.953082    6700 provision.go:87] duration metric: took 15.0906879s to configureAuth
	I0709 10:21:09.953082    6700 buildroot.go:189] setting minikube options for container-runtime
	I0709 10:21:09.953690    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:21:09.954274    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:12.117069    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:12.117069    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:12.117069    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:14.698815    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:14.698815    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:14.706424    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:21:14.706592    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:21:14.706592    6700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 10:21:14.829911    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 10:21:14.829911    6700 buildroot.go:70] root file system type: tmpfs
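	The filesystem probe the provisioner just ran over SSH is a one-liner; a tmpfs root indicates a RAM-booted buildroot image, which is why the docker unit is written fresh under /lib/systemd/system on every provision (the interpretation of tmpfs here is inferred from the buildroot.go log line above):

	```shell
	# Print the filesystem type of /, exactly as the SSH command above does.
	df --output=fstype / | tail -n 1
	```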
	I0709 10:21:14.829911    6700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 10:21:14.829911    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:16.981791    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:16.982243    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:16.982243    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:19.580394    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:19.581453    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:19.587354    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:21:19.587567    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:21:19.587567    6700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.204.161"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 10:21:19.738959    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.204.161
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 10:21:19.738959    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:21.889937    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:21.889937    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:21.890512    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:24.487444    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:24.488476    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:24.494624    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:21:24.495266    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:21:24.495266    6700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 10:21:26.694360    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 10:21:26.694360    6700 machine.go:97] duration metric: took 46.6306509s to provisionDockerMachine
	I0709 10:21:26.694360    6700 client.go:171] duration metric: took 1m57.5652308s to LocalClient.Create
	I0709 10:21:26.694360    6700 start.go:167] duration metric: took 1m57.5652308s to libmachine.API.Create "ha-400600"
	I0709 10:21:26.694360    6700 start.go:293] postStartSetup for "ha-400600-m02" (driver="hyperv")
	I0709 10:21:26.694360    6700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 10:21:26.706639    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 10:21:26.706639    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:28.871632    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:28.871632    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:28.872484    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:31.433006    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:31.433006    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:31.433891    6700 sshutil.go:53] new ssh client: &{IP:172.18.194.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\id_rsa Username:docker}
	I0709 10:21:31.550775    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8440932s)
	I0709 10:21:31.563157    6700 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 10:21:31.570427    6700 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 10:21:31.570427    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 10:21:31.570962    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 10:21:31.572099    6700 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 10:21:31.572099    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 10:21:31.585416    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 10:21:31.603788    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 10:21:31.649410    6700 start.go:296] duration metric: took 4.9550386s for postStartSetup
	I0709 10:21:31.652291    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:33.893843    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:33.893843    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:33.893843    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:36.488354    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:36.488354    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:36.488916    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:21:36.491413    6700 start.go:128] duration metric: took 2m7.3698531s to createHost
	I0709 10:21:36.491413    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:38.712894    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:38.712988    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:38.713072    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:41.277455    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:41.277455    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:41.284279    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:21:41.284848    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:21:41.284848    6700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 10:21:41.406238    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720545701.413402406
	
	I0709 10:21:41.406238    6700 fix.go:216] guest clock: 1720545701.413402406
	I0709 10:21:41.406300    6700 fix.go:229] Guest: 2024-07-09 10:21:41.413402406 -0700 PDT Remote: 2024-07-09 10:21:36.4914138 -0700 PDT m=+333.833296901 (delta=4.921988606s)
	I0709 10:21:41.406379    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:43.597896    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:43.597896    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:43.597896    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:46.216390    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:46.216390    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:46.223023    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:21:46.223436    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.194.29 22 <nil> <nil>}
	I0709 10:21:46.223436    6700 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720545701
	I0709 10:21:46.367188    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 17:21:41 UTC 2024
	
	I0709 10:21:46.367188    6700 fix.go:236] clock set: Tue Jul  9 17:21:41 UTC 2024
	 (err=<nil>)
	I0709 10:21:46.367188    6700 start.go:83] releasing machines lock for "ha-400600-m02", held for 2m17.2457859s
	I0709 10:21:46.367188    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:48.570250    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:48.570969    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:48.570969    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:51.205915    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:51.205915    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:51.210294    6700 out.go:177] * Found network options:
	I0709 10:21:51.213256    6700 out.go:177]   - NO_PROXY=172.18.204.161
	W0709 10:21:51.216704    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 10:21:51.219291    6700 out.go:177]   - NO_PROXY=172.18.204.161
	W0709 10:21:51.221600    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 10:21:51.221967    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 10:21:51.224998    6700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 10:21:51.224998    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:51.235003    6700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 10:21:51.235003    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m02 ).state
	I0709 10:21:53.529325    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:53.529325    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:53.529325    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:21:53.529325    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:53.530257    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:53.530257    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 10:21:56.324171    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:56.324171    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:56.324171    6700 sshutil.go:53] new ssh client: &{IP:172.18.194.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\id_rsa Username:docker}
	I0709 10:21:56.348330    6700 main.go:141] libmachine: [stdout =====>] : 172.18.194.29
	
	I0709 10:21:56.348330    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:21:56.348330    6700 sshutil.go:53] new ssh client: &{IP:172.18.194.29 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m02\id_rsa Username:docker}
	I0709 10:21:56.414093    6700 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1790781s)
	W0709 10:21:56.414211    6700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 10:21:56.426950    6700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 10:21:56.506247    6700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 10:21:56.506247    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:21:56.506247    6700 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2812367s)
	I0709 10:21:56.506417    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:21:56.553989    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 10:21:56.584947    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 10:21:56.605999    6700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 10:21:56.617667    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 10:21:56.647545    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:21:56.677537    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 10:21:56.708864    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:21:56.740788    6700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 10:21:56.773035    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 10:21:56.801875    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 10:21:56.831894    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 10:21:56.863828    6700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 10:21:56.893732    6700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 10:21:56.923280    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:21:57.122823    6700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 10:21:57.168186    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:21:57.180708    6700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 10:21:57.224807    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:21:57.261180    6700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 10:21:57.308495    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:21:57.344493    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:21:57.383282    6700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 10:21:57.447602    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:21:57.472052    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:21:57.516564    6700 ssh_runner.go:195] Run: which cri-dockerd
	I0709 10:21:57.534258    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 10:21:57.551834    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 10:21:57.599321    6700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 10:21:57.813433    6700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 10:21:58.006045    6700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 10:21:58.006045    6700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 10:21:58.066905    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:21:58.258571    6700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 10:22:00.839941    6700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5813642s)
	I0709 10:22:00.852627    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 10:22:00.893278    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:22:00.927214    6700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 10:22:01.128640    6700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 10:22:01.322499    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:22:01.520908    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 10:22:01.565905    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:22:01.602197    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:22:01.803855    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 10:22:01.913637    6700 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 10:22:01.925742    6700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 10:22:01.936387    6700 start.go:562] Will wait 60s for crictl version
	I0709 10:22:01.948665    6700 ssh_runner.go:195] Run: which crictl
	I0709 10:22:01.966862    6700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 10:22:02.028478    6700 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 10:22:02.038922    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:22:02.087983    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:22:02.128717    6700 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 10:22:02.132672    6700 out.go:177]   - env NO_PROXY=172.18.204.161
	I0709 10:22:02.134736    6700 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 10:22:02.138702    6700 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 10:22:02.138702    6700 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 10:22:02.138702    6700 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 10:22:02.138702    6700 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 10:22:02.141674    6700 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 10:22:02.142722    6700 ip.go:210] interface addr: 172.18.192.1/20
	I0709 10:22:02.152660    6700 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 10:22:02.158612    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 10:22:02.180266    6700 mustload.go:65] Loading cluster: ha-400600
	I0709 10:22:02.181018    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:22:02.182015    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:22:04.362014    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:22:04.362014    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:22:04.362014    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:22:04.363662    6700 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600 for IP: 172.18.194.29
	I0709 10:22:04.363662    6700 certs.go:194] generating shared ca certs ...
	I0709 10:22:04.363935    6700 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:22:04.364556    6700 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 10:22:04.365161    6700 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 10:22:04.365430    6700 certs.go:256] generating profile certs ...
	I0709 10:22:04.365790    6700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.key
	I0709 10:22:04.365790    6700 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.73210077
	I0709 10:22:04.366425    6700 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.73210077 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.204.161 172.18.194.29 172.18.207.254]
	I0709 10:22:04.551536    6700 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.73210077 ...
	I0709 10:22:04.551536    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.73210077: {Name:mk4a51d16faaa4f23e66052e6592db0df7d43bee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:22:04.552956    6700 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.73210077 ...
	I0709 10:22:04.552956    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.73210077: {Name:mkb98654f3a8d12070f23724cefc35befb1c4352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:22:04.554457    6700 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.73210077 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt
	I0709 10:22:04.566153    6700 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.73210077 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key
	I0709 10:22:04.567898    6700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 10:22:04.567898    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 10:22:04.569195    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 10:22:04.569576    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 10:22:04.569886    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 10:22:04.569886    6700 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 10:22:04.569886    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 10:22:04.570893    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 10:22:04.571159    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 10:22:04.571159    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 10:22:04.572167    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 10:22:04.572278    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 10:22:04.572278    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 10:22:04.572278    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:22:04.573020    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:22:06.790104    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:22:06.790104    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:22:06.790557    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:22:09.444085    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:22:09.444272    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:22:09.444333    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:22:09.553785    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0709 10:22:09.562857    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0709 10:22:09.599367    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0709 10:22:09.606882    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0709 10:22:09.640199    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0709 10:22:09.646942    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0709 10:22:09.680015    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0709 10:22:09.688196    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0709 10:22:09.720820    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0709 10:22:09.727704    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0709 10:22:09.762645    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0709 10:22:09.768667    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0709 10:22:09.789341    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 10:22:09.837953    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 10:22:09.887405    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 10:22:09.934516    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 10:22:09.985492    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0709 10:22:10.033068    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 10:22:10.086319    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 10:22:10.136019    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 10:22:10.182147    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 10:22:10.228981    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 10:22:10.276301    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 10:22:10.323257    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0709 10:22:10.357444    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0709 10:22:10.391158    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0709 10:22:10.424803    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0709 10:22:10.457769    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0709 10:22:10.488572    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0709 10:22:10.520226    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0709 10:22:10.566902    6700 ssh_runner.go:195] Run: openssl version
	I0709 10:22:10.590073    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 10:22:10.623913    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 10:22:10.631657    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 10:22:10.645423    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 10:22:10.669538    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 10:22:10.703208    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 10:22:10.734343    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 10:22:10.741731    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 10:22:10.753732    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 10:22:10.776554    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 10:22:10.808266    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 10:22:10.841624    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:22:10.848450    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:22:10.861449    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:22:10.882644    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 10:22:10.914774    6700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 10:22:10.921346    6700 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 10:22:10.921346    6700 kubeadm.go:928] updating node {m02 172.18.194.29 8443 v1.30.2 docker true true} ...
	I0709 10:22:10.921962    6700 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-400600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.194.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 10:22:10.921962    6700 kube-vip.go:115] generating kube-vip config ...
	I0709 10:22:10.933625    6700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0709 10:22:10.959172    6700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0709 10:22:10.960659    6700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0709 10:22:10.971620    6700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 10:22:10.987878    6700 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0709 10:22:10.999677    6700 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0709 10:22:11.022406    6700 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet
	I0709 10:22:11.023027    6700 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl
	I0709 10:22:11.023087    6700 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm
	I0709 10:22:12.077078    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0709 10:22:12.089302    6700 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0709 10:22:12.097821    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0709 10:22:12.098037    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0709 10:22:12.167467    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0709 10:22:12.172966    6700 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0709 10:22:12.191910    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0709 10:22:12.191910    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0709 10:22:12.475387    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 10:22:12.558504    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0709 10:22:12.584056    6700 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0709 10:22:12.601073    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0709 10:22:12.602051    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0709 10:22:13.555164    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0709 10:22:13.575057    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0709 10:22:13.607641    6700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 10:22:13.642222    6700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0709 10:22:13.689225    6700 ssh_runner.go:195] Run: grep 172.18.207.254	control-plane.minikube.internal$ /etc/hosts
	I0709 10:22:13.695919    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 10:22:13.731217    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:22:13.950355    6700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:22:13.982022    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:22:13.983077    6700 start.go:316] joinCluster: &{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.194.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 10:22:13.983326    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0709 10:22:13.983403    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:22:16.195346    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:22:16.195346    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:22:16.195346    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:22:18.827529    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:22:18.827529    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:22:18.828078    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:22:19.037769    6700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0543976s)
	I0709 10:22:19.037882    6700 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.18.194.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:22:19.037882    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hidln1.r2nzqumybz2oot2d --discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-400600-m02 --control-plane --apiserver-advertise-address=172.18.194.29 --apiserver-bind-port=8443"
	I0709 10:23:05.800803    6700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hidln1.r2nzqumybz2oot2d --discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-400600-m02 --control-plane --apiserver-advertise-address=172.18.194.29 --apiserver-bind-port=8443": (46.7627236s)
	I0709 10:23:05.800961    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0709 10:23:06.621841    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-400600-m02 minikube.k8s.io/updated_at=2024_07_09T10_23_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=ha-400600 minikube.k8s.io/primary=false
	I0709 10:23:06.802867    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-400600-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0709 10:23:06.949058    6700 start.go:318] duration metric: took 52.9658596s to joinCluster
	I0709 10:23:06.949215    6700 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.18.194.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:23:06.949773    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:23:06.951938    6700 out.go:177] * Verifying Kubernetes components...
	I0709 10:23:06.967361    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:23:07.305940    6700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:23:07.338767    6700 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:23:07.339647    6700 kapi.go:59] client config for ha-400600: &rest.Config{Host:"https://172.18.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0709 10:23:07.339871    6700 kubeadm.go:477] Overriding stale ClientConfig host https://172.18.207.254:8443 with https://172.18.204.161:8443
	I0709 10:23:07.340984    6700 node_ready.go:35] waiting up to 6m0s for node "ha-400600-m02" to be "Ready" ...
	I0709 10:23:07.341187    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:07.341187    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:07.341187    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:07.341247    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:07.377627    6700 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0709 10:23:07.842079    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:07.842410    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:07.842410    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:07.842410    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:07.889542    6700 round_trippers.go:574] Response Status: 200 OK in 46 milliseconds
	I0709 10:23:08.349595    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:08.349595    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:08.349595    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:08.349595    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:08.356332    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:08.856185    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:08.856185    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:08.856185    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:08.856185    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:08.872490    6700 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 10:23:09.342634    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:09.342634    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:09.342634    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:09.342634    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:09.352365    6700 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0709 10:23:09.354049    6700 node_ready.go:53] node "ha-400600-m02" has status "Ready":"False"
	I0709 10:23:09.849205    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:09.849434    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:09.849434    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:09.849560    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:09.855905    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:10.354979    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:10.355194    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:10.355194    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:10.355194    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:10.362058    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:10.848162    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:10.848162    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:10.848162    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:10.848162    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:10.858733    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:23:11.353740    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:11.353740    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:11.353740    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:11.353740    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:11.357194    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:11.358553    6700 node_ready.go:53] node "ha-400600-m02" has status "Ready":"False"
	I0709 10:23:11.844023    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:11.844314    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:11.844314    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:11.844314    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:11.852650    6700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 10:23:12.355129    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:12.355129    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:12.355192    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:12.355211    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:12.360126    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:12.854064    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:12.854157    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:12.854157    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:12.854157    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:12.860757    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:13.352389    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:13.352389    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:13.352389    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:13.352389    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:13.358956    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:13.359834    6700 node_ready.go:53] node "ha-400600-m02" has status "Ready":"False"
	I0709 10:23:13.847049    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:13.847049    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:13.847049    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:13.847049    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:13.850636    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:14.355976    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:14.355976    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:14.355976    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:14.355976    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:14.361341    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:14.843112    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:14.843112    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:14.843522    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:14.843522    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:14.848926    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:15.346436    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:15.346436    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:15.346436    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:15.346436    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:15.351012    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:15.848171    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:15.848171    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:15.848171    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:15.848171    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:15.853410    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:15.854875    6700 node_ready.go:53] node "ha-400600-m02" has status "Ready":"False"
	I0709 10:23:16.345436    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:16.345436    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:16.345436    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:16.345436    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:16.352975    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:16.846429    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:16.846669    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:16.846669    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:16.846669    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:16.851042    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.351331    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:17.351538    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.351538    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.351538    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.356727    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:17.358134    6700 node_ready.go:49] node "ha-400600-m02" has status "Ready":"True"
	I0709 10:23:17.358134    6700 node_ready.go:38] duration metric: took 10.0170375s for node "ha-400600-m02" to be "Ready" ...
	I0709 10:23:17.358134    6700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:23:17.358134    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:23:17.358134    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.358134    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.358134    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.368322    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:23:17.377659    6700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.377659    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zbxnq
	I0709 10:23:17.377659    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.377659    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.377659    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.381265    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:17.382578    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:17.382578    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.382702    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.382702    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.387438    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.388677    6700 pod_ready.go:92] pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:17.388741    6700 pod_ready.go:81] duration metric: took 11.0815ms for pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.388741    6700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.388805    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zst2x
	I0709 10:23:17.388876    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.388876    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.388876    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.392891    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.393903    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:17.393992    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.393992    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.393992    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.396951    6700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:23:17.398566    6700 pod_ready.go:92] pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:17.398566    6700 pod_ready.go:81] duration metric: took 9.8248ms for pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.398566    6700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.398755    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600
	I0709 10:23:17.398755    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.398755    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.398755    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.402069    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:17.403190    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:17.403190    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.403190    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.403190    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.407867    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.409038    6700 pod_ready.go:92] pod "etcd-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:17.409038    6700 pod_ready.go:81] duration metric: took 10.4724ms for pod "etcd-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.409038    6700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:17.409184    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:17.409184    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.409184    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.409291    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.413351    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.414206    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:17.414206    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.414206    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.414206    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.419020    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:17.919304    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:17.919304    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.919304    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.919304    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.925841    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:17.927630    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:17.927630    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:17.927630    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:17.927727    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:17.935434    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:18.423205    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:18.423268    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:18.423268    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:18.423268    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:18.431136    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:18.432688    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:18.432784    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:18.432784    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:18.432784    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:18.436590    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:18.910863    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:18.910962    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:18.910962    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:18.910962    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:18.915099    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:18.916714    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:18.916744    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:18.916939    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:18.916980    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:18.921097    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:19.425016    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:19.425016    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:19.425016    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:19.425016    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:19.430790    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:19.431701    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:19.431701    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:19.431796    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:19.431796    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:19.436835    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:19.437559    6700 pod_ready.go:102] pod "etcd-ha-400600-m02" in "kube-system" namespace has status "Ready":"False"
	I0709 10:23:19.912987    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:19.912987    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:19.912987    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:19.913390    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:19.919716    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:23:19.920766    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:19.920856    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:19.920856    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:19.920856    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:19.925120    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:20.411212    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:20.411212    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:20.411212    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:20.411212    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:20.414530    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:20.416025    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:20.416025    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:20.416025    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:20.416025    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:20.420684    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:20.914024    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:20.914024    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:20.914024    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:20.914024    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:20.919463    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:20.921045    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:20.921045    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:20.921045    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:20.921045    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:20.944236    6700 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0709 10:23:21.415444    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:23:21.415534    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.415534    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.415534    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.419901    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:21.421583    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:21.421644    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.421644    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.421644    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.424902    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:23:21.425894    6700 pod_ready.go:92] pod "etcd-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:21.425894    6700 pod_ready.go:81] duration metric: took 4.0168465s for pod "etcd-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.425894    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.425894    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600
	I0709 10:23:21.425894    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.425894    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.425894    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.430644    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:21.431406    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:21.431406    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.431406    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.431406    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.460867    6700 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0709 10:23:21.461601    6700 pod_ready.go:92] pod "kube-apiserver-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:21.461658    6700 pod_ready.go:81] duration metric: took 35.7068ms for pod "kube-apiserver-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.461658    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.461894    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600-m02
	I0709 10:23:21.461894    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.461894    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.461894    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.470987    6700 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0709 10:23:21.472429    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:21.472429    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.472542    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.472542    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.476994    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:21.477737    6700 pod_ready.go:92] pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:21.477737    6700 pod_ready.go:81] duration metric: took 16.0797ms for pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.477737    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.555755    6700 request.go:629] Waited for 77.7485ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600
	I0709 10:23:21.555860    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600
	I0709 10:23:21.555860    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.555976    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.555976    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.560689    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:21.759551    6700 request.go:629] Waited for 196.9091ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:21.759666    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:21.759666    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.759666    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.759666    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.765123    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:21.766468    6700 pod_ready.go:92] pod "kube-controller-manager-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:21.766533    6700 pod_ready.go:81] duration metric: took 288.7185ms for pod "kube-controller-manager-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.766533    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:21.962277    6700 request.go:629] Waited for 195.2609ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m02
	I0709 10:23:21.962497    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m02
	I0709 10:23:21.962497    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:21.962561    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:21.962561    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:21.968323    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:22.165651    6700 request.go:629] Waited for 195.7957ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:22.165651    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:22.165651    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:22.165651    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:22.165651    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:22.171320    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:22.171723    6700 pod_ready.go:92] pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:22.172636    6700 pod_ready.go:81] duration metric: took 406.1022ms for pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:22.172705    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7k7w8" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:22.352412    6700 request.go:629] Waited for 179.4472ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7k7w8
	I0709 10:23:22.352787    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7k7w8
	I0709 10:23:22.352787    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:22.352787    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:22.352787    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:22.358622    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:22.555850    6700 request.go:629] Waited for 195.6488ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:22.555850    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:22.555850    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:22.555850    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:22.555850    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:22.561655    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:22.562527    6700 pod_ready.go:92] pod "kube-proxy-7k7w8" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:22.562582    6700 pod_ready.go:81] duration metric: took 389.8759ms for pod "kube-proxy-7k7w8" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:22.562582    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-djlzm" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:22.758231    6700 request.go:629] Waited for 195.4993ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djlzm
	I0709 10:23:22.758472    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djlzm
	I0709 10:23:22.758472    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:22.758548    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:22.758548    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:22.766564    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:22.960713    6700 request.go:629] Waited for 193.6089ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:22.961027    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:22.961027    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:22.961027    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:22.961027    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:22.965557    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:22.967338    6700 pod_ready.go:92] pod "kube-proxy-djlzm" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:22.967338    6700 pod_ready.go:81] duration metric: took 404.7546ms for pod "kube-proxy-djlzm" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:22.967427    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:23.167134    6700 request.go:629] Waited for 199.6376ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600
	I0709 10:23:23.167134    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600
	I0709 10:23:23.167134    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.167134    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.167134    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.171406    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:23.356491    6700 request.go:629] Waited for 183.3604ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:23.356740    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:23:23.356792    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.356792    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.356792    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.361308    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:23:23.362948    6700 pod_ready.go:92] pod "kube-scheduler-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:23.363024    6700 pod_ready.go:81] duration metric: took 395.5961ms for pod "kube-scheduler-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:23.363024    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:23.559171    6700 request.go:629] Waited for 195.9164ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m02
	I0709 10:23:23.559364    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m02
	I0709 10:23:23.559466    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.559466    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.559466    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.564861    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:23.765887    6700 request.go:629] Waited for 199.6772ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:23.765887    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:23:23.765887    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.765887    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.765887    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.771580    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:23.772886    6700 pod_ready.go:92] pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:23:23.773053    6700 pod_ready.go:81] duration metric: took 410.0278ms for pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:23:23.773053    6700 pod_ready.go:38] duration metric: took 6.414904s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:23:23.773179    6700 api_server.go:52] waiting for apiserver process to appear ...
	I0709 10:23:23.785039    6700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 10:23:23.814980    6700 api_server.go:72] duration metric: took 16.8655931s to wait for apiserver process to appear ...
	I0709 10:23:23.815059    6700 api_server.go:88] waiting for apiserver healthz status ...
	I0709 10:23:23.815059    6700 api_server.go:253] Checking apiserver healthz at https://172.18.204.161:8443/healthz ...
	I0709 10:23:23.822770    6700 api_server.go:279] https://172.18.204.161:8443/healthz returned 200:
	ok
	I0709 10:23:23.823200    6700 round_trippers.go:463] GET https://172.18.204.161:8443/version
	I0709 10:23:23.823261    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.823355    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.823386    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.824546    6700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:23:23.825444    6700 api_server.go:141] control plane version: v1.30.2
	I0709 10:23:23.825584    6700 api_server.go:131] duration metric: took 10.525ms to wait for apiserver health ...
	I0709 10:23:23.825662    6700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 10:23:23.953771    6700 request.go:629] Waited for 127.9182ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:23:23.953877    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:23:23.953877    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:23.953877    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:23.953877    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:23.961114    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:23.968078    6700 system_pods.go:59] 17 kube-system pods found
	I0709 10:23:23.968078    6700 system_pods.go:61] "coredns-7db6d8ff4d-zbxnq" [127df4db-c095-440f-99a7-9292ba82a544] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "coredns-7db6d8ff4d-zst2x" [826902b3-67ea-41ab-8e36-ede312957536] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "etcd-ha-400600" [0ff09041-fa9f-43ec-bc74-714f695696dd] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "etcd-ha-400600-m02" [3b4c61e9-fc5d-4949-9270-1be8dae8a1eb] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kindnet-fnjm5" [3c5407e2-73e5-4514-a15d-1eb1e4355e09] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kindnet-qjr4d" [323f057b-87f0-43ad-80ba-19045dcf980e] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-apiserver-ha-400600" [8fa85247-6e51-4fac-b7f3-c8d1853320dc] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-apiserver-ha-400600-m02" [325f42b9-5ea2-4beb-b2ad-a922f61684eb] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-controller-manager-ha-400600" [9d031336-f17a-497c-abe1-5d5a2f0b0fd7] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-controller-manager-ha-400600-m02" [9b9c50f2-b753-4baf-9233-11fe5fecbf08] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-proxy-7k7w8" [048f20f9-b1a5-42d4-877d-e4d1393f1a4d] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-proxy-djlzm" [e73d5dec-dbd4-473d-b100-f3392ddb9445] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-scheduler-ha-400600" [ac1ef599-6195-41b1-803a-cf249851ad0b] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-scheduler-ha-400600-m02" [ecbe6536-b868-479c-bfdb-d038c413885e] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-vip-ha-400600" [d6b5a66d-c55b-49da-b972-18d29a106ee3] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "kube-vip-ha-400600-m02" [98ea4304-96dd-4840-bafc-427e97b286f3] Running
	I0709 10:23:23.968078    6700 system_pods.go:61] "storage-provisioner" [f4b5ca7f-2c94-4c34-93b8-4977a2b723aa] Running
	I0709 10:23:23.968078    6700 system_pods.go:74] duration metric: took 142.4164ms to wait for pod list to return data ...
	I0709 10:23:23.968078    6700 default_sa.go:34] waiting for default service account to be created ...
	I0709 10:23:24.156616    6700 request.go:629] Waited for 187.7152ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/default/serviceaccounts
	I0709 10:23:24.156616    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/default/serviceaccounts
	I0709 10:23:24.156616    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:24.156616    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:24.156616    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:24.161662    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:24.162756    6700 default_sa.go:45] found service account: "default"
	I0709 10:23:24.162835    6700 default_sa.go:55] duration metric: took 194.756ms for default service account to be created ...
	I0709 10:23:24.162835    6700 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 10:23:24.358541    6700 request.go:629] Waited for 195.4367ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:23:24.358541    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:23:24.358763    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:24.358763    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:24.358763    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:24.366508    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:23:24.374700    6700 system_pods.go:86] 17 kube-system pods found
	I0709 10:23:24.374700    6700 system_pods.go:89] "coredns-7db6d8ff4d-zbxnq" [127df4db-c095-440f-99a7-9292ba82a544] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "coredns-7db6d8ff4d-zst2x" [826902b3-67ea-41ab-8e36-ede312957536] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "etcd-ha-400600" [0ff09041-fa9f-43ec-bc74-714f695696dd] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "etcd-ha-400600-m02" [3b4c61e9-fc5d-4949-9270-1be8dae8a1eb] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kindnet-fnjm5" [3c5407e2-73e5-4514-a15d-1eb1e4355e09] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kindnet-qjr4d" [323f057b-87f0-43ad-80ba-19045dcf980e] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-apiserver-ha-400600" [8fa85247-6e51-4fac-b7f3-c8d1853320dc] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-apiserver-ha-400600-m02" [325f42b9-5ea2-4beb-b2ad-a922f61684eb] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-controller-manager-ha-400600" [9d031336-f17a-497c-abe1-5d5a2f0b0fd7] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-controller-manager-ha-400600-m02" [9b9c50f2-b753-4baf-9233-11fe5fecbf08] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-proxy-7k7w8" [048f20f9-b1a5-42d4-877d-e4d1393f1a4d] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-proxy-djlzm" [e73d5dec-dbd4-473d-b100-f3392ddb9445] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-scheduler-ha-400600" [ac1ef599-6195-41b1-803a-cf249851ad0b] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-scheduler-ha-400600-m02" [ecbe6536-b868-479c-bfdb-d038c413885e] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-vip-ha-400600" [d6b5a66d-c55b-49da-b972-18d29a106ee3] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "kube-vip-ha-400600-m02" [98ea4304-96dd-4840-bafc-427e97b286f3] Running
	I0709 10:23:24.374700    6700 system_pods.go:89] "storage-provisioner" [f4b5ca7f-2c94-4c34-93b8-4977a2b723aa] Running
	I0709 10:23:24.374700    6700 system_pods.go:126] duration metric: took 211.8649ms to wait for k8s-apps to be running ...
	I0709 10:23:24.374700    6700 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 10:23:24.392548    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 10:23:24.427525    6700 system_svc.go:56] duration metric: took 52.8246ms WaitForService to wait for kubelet
	I0709 10:23:24.427525    6700 kubeadm.go:576] duration metric: took 17.4781365s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 10:23:24.427525    6700 node_conditions.go:102] verifying NodePressure condition ...
	I0709 10:23:24.563935    6700 request.go:629] Waited for 135.4025ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes
	I0709 10:23:24.564226    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes
	I0709 10:23:24.564368    6700 round_trippers.go:469] Request Headers:
	I0709 10:23:24.564390    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:23:24.564390    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:23:24.570220    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:23:24.571256    6700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:23:24.571256    6700 node_conditions.go:123] node cpu capacity is 2
	I0709 10:23:24.571256    6700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:23:24.571256    6700 node_conditions.go:123] node cpu capacity is 2
	I0709 10:23:24.571256    6700 node_conditions.go:105] duration metric: took 142.7233ms to run NodePressure ...
	I0709 10:23:24.571256    6700 start.go:240] waiting for startup goroutines ...
	I0709 10:23:24.571256    6700 start.go:254] writing updated cluster config ...
	I0709 10:23:24.575293    6700 out.go:177] 
	I0709 10:23:24.589380    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:23:24.589973    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:23:24.597327    6700 out.go:177] * Starting "ha-400600-m03" control-plane node in "ha-400600" cluster
	I0709 10:23:24.599694    6700 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 10:23:24.599694    6700 cache.go:56] Caching tarball of preloaded images
	I0709 10:23:24.599694    6700 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 10:23:24.600239    6700 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 10:23:24.600499    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:23:24.604823    6700 start.go:360] acquireMachinesLock for ha-400600-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 10:23:24.604823    6700 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-400600-m03"
	I0709 10:23:24.605203    6700 start.go:93] Provisioning new machine with config: &{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.194.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:23:24.605203    6700 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0709 10:23:24.607715    6700 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 10:23:24.608882    6700 start.go:159] libmachine.API.Create for "ha-400600" (driver="hyperv")
	I0709 10:23:24.608984    6700 client.go:168] LocalClient.Create starting
	I0709 10:23:24.609347    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 10:23:24.609868    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:23:24.609868    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:23:24.609868    6700 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 10:23:24.609868    6700 main.go:141] libmachine: Decoding PEM data...
	I0709 10:23:24.609868    6700 main.go:141] libmachine: Parsing certificate...
	I0709 10:23:24.609868    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 10:23:26.511464    6700 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 10:23:26.511464    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:26.511464    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 10:23:28.245149    6700 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 10:23:28.245185    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:28.245185    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:23:29.757893    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:23:29.758721    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:29.758721    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:23:33.542887    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:23:33.542887    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:33.544929    6700 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 10:23:33.985493    6700 main.go:141] libmachine: Creating SSH key...
	I0709 10:23:34.350447    6700 main.go:141] libmachine: Creating VM...
	I0709 10:23:34.350447    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 10:23:37.267930    6700 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 10:23:37.267930    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:37.268122    6700 main.go:141] libmachine: Using switch "Default Switch"
	I0709 10:23:37.268122    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 10:23:39.039137    6700 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 10:23:39.040055    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:39.040055    6700 main.go:141] libmachine: Creating VHD
	I0709 10:23:39.040055    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 10:23:42.846665    6700 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1CAB5C5E-5591-4B25-98CE-5DC8F79B9BFC
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 10:23:42.847732    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:42.847732    6700 main.go:141] libmachine: Writing magic tar header
	I0709 10:23:42.847732    6700 main.go:141] libmachine: Writing SSH key tar header
	I0709 10:23:42.856565    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 10:23:46.094711    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:23:46.094711    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:46.095337    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\disk.vhd' -SizeBytes 20000MB
	I0709 10:23:48.678788    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:23:48.678788    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:48.679461    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-400600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 10:23:52.397685    6700 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-400600-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 10:23:52.397685    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:52.397809    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-400600-m03 -DynamicMemoryEnabled $false
	I0709 10:23:54.686328    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:23:54.686530    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:54.686621    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-400600-m03 -Count 2
	I0709 10:23:56.903816    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:23:56.903816    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:56.904387    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-400600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\boot2docker.iso'
	I0709 10:23:59.497636    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:23:59.497636    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:23:59.497636    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-400600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\disk.vhd'
	I0709 10:24:02.203712    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:02.203712    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:02.203712    6700 main.go:141] libmachine: Starting VM...
	I0709 10:24:02.203712    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-400600-m03
	I0709 10:24:05.372618    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:05.372618    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:05.372618    6700 main.go:141] libmachine: Waiting for host to start...
	I0709 10:24:05.372744    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:07.762776    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:07.762854    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:07.762854    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:10.369792    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:10.369792    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:11.375501    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:13.682719    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:13.682785    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:13.682893    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:16.341977    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:16.341977    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:17.348138    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:19.625776    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:19.626793    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:19.626865    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:22.298277    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:22.298277    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:23.305923    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:25.564250    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:25.564320    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:25.564400    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:28.177445    6700 main.go:141] libmachine: [stdout =====>] : 
	I0709 10:24:28.178396    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:29.182821    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:31.490782    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:31.490782    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:31.490782    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:34.190292    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:24:34.191293    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:34.191420    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:36.418524    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:36.418524    6700 main.go:141] libmachine: [stderr =====>] : 
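	[editor's note: the repeated Get-VM/ipaddresses calls above are a poll-until-ready loop — query the VM state, query its first IP, sleep, retry until an address appears. A minimal shell sketch of that pattern, with `get_ip` as a hypothetical stand-in for the PowerShell `(Get-VM ...).networkadapters[0].ipaddresses[0]` call:]

```shell
# Sketch of the "Waiting for host to start..." loop seen in the log:
# poll a command until it prints a non-empty value or retries run out.
# get_ip is an illustrative stub; the real code shells out to PowerShell.
get_ip() {
  echo "172.18.201.166"
}

poll_for_ip() {
  retries=$1
  while [ "$retries" -gt 0 ]; do
    ip=$(get_ip)
    if [ -n "$ip" ]; then
      # Got an address: print it and succeed.
      echo "$ip"
      return 0
    fi
    retries=$((retries - 1))
    sleep 1
  done
  # Budget exhausted without an address.
  return 1
}
```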
	I0709 10:24:36.418524    6700 machine.go:94] provisionDockerMachine start ...
	I0709 10:24:36.419247    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:38.649258    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:38.649908    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:38.650009    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:41.243798    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:24:41.243798    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:41.249898    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:24:41.250071    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:24:41.250071    6700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 10:24:41.376640    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 10:24:41.376640    6700 buildroot.go:166] provisioning hostname "ha-400600-m03"
	I0709 10:24:41.376801    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:43.566727    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:43.566727    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:43.566727    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:46.193308    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:24:46.193600    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:46.199791    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:24:46.200395    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:24:46.200395    6700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-400600-m03 && echo "ha-400600-m03" | sudo tee /etc/hostname
	I0709 10:24:46.348419    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-400600-m03
	
	I0709 10:24:46.348822    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:48.523526    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:48.523526    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:48.524170    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:51.156890    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:24:51.156890    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:51.165336    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:24:51.166460    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:24:51.166460    6700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-400600-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-400600-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-400600-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 10:24:51.316444    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
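	[editor's note: the SSH command above updates /etc/hosts idempotently — skip if the node name is already present, rewrite an existing 127.0.1.1 line in place, otherwise append one. A self-contained sketch of the same pattern against a temp file (assumes GNU sed for `-i`; file path and second invocation are illustrative):]

```shell
# Idempotent hosts-file entry, mirroring the command minikube runs over SSH.
HOSTS_FILE=$(mktemp)            # stand-in for /etc/hosts
NODE_NAME=ha-400600-m03
printf '127.0.0.1 localhost\n' > "$HOSTS_FILE"

add_hosts_entry() {
  if ! grep -q "[[:space:]]$NODE_NAME\$" "$HOSTS_FILE"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS_FILE"; then
      # A 127.0.1.1 line exists: rewrite it in place.
      sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NODE_NAME/" "$HOSTS_FILE"
    else
      # No 127.0.1.1 line yet: append one.
      echo "127.0.1.1 $NODE_NAME" >> "$HOSTS_FILE"
    fi
  fi
}

add_hosts_entry
add_hosts_entry   # second run is a no-op: still exactly one entry
```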
	I0709 10:24:51.316562    6700 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 10:24:51.316562    6700 buildroot.go:174] setting up certificates
	I0709 10:24:51.316648    6700 provision.go:84] configureAuth start
	I0709 10:24:51.316648    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:53.577201    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:53.577201    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:53.577364    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:24:56.243194    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:24:56.243574    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:56.243639    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:24:58.470485    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:24:58.470717    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:24:58.470717    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:01.111864    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:01.112601    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:01.112601    6700 provision.go:143] copyHostCerts
	I0709 10:25:01.112768    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 10:25:01.113077    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 10:25:01.113077    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 10:25:01.113145    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 10:25:01.114893    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 10:25:01.114893    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 10:25:01.114893    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 10:25:01.115437    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 10:25:01.116798    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 10:25:01.116798    6700 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 10:25:01.116798    6700 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 10:25:01.117466    6700 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 10:25:01.118570    6700 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-400600-m03 san=[127.0.0.1 172.18.201.166 ha-400600-m03 localhost minikube]
	I0709 10:25:01.299673    6700 provision.go:177] copyRemoteCerts
	I0709 10:25:01.314182    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 10:25:01.314182    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:03.506447    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:03.506447    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:03.506711    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:06.149043    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:06.149043    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:06.149926    6700 sshutil.go:53] new ssh client: &{IP:172.18.201.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\id_rsa Username:docker}
	I0709 10:25:06.256655    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9424617s)
	I0709 10:25:06.256775    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 10:25:06.257210    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0709 10:25:06.306320    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 10:25:06.306844    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0709 10:25:06.354008    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 10:25:06.354466    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 10:25:06.401973    6700 provision.go:87] duration metric: took 15.0852891s to configureAuth
	I0709 10:25:06.401973    6700 buildroot.go:189] setting minikube options for container-runtime
	I0709 10:25:06.403025    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:25:06.403114    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:08.590869    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:08.590869    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:08.590957    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:11.223812    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:11.223812    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:11.229892    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:25:11.230110    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:25:11.230110    6700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 10:25:11.347803    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 10:25:11.347897    6700 buildroot.go:70] root file system type: tmpfs
	I0709 10:25:11.348117    6700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 10:25:11.348198    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:13.539025    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:13.539025    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:13.539558    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:16.104022    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:16.104022    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:16.109866    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:25:16.110561    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:25:16.110561    6700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.204.161"
	Environment="NO_PROXY=172.18.204.161,172.18.194.29"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 10:25:16.267369    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.204.161
	Environment=NO_PROXY=172.18.204.161,172.18.194.29
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 10:25:16.267998    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:18.447678    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:18.447678    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:18.448281    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:21.081667    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:21.081667    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:21.086965    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:25:21.087746    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:25:21.087746    6700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 10:25:23.370298    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 10:25:23.370298    6700 machine.go:97] duration metric: took 46.9510639s to provisionDockerMachine
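The `sudo diff -u … || { sudo mv …; systemctl daemon-reload && … restart docker; }` command a few lines above is minikube's idempotent unit-file update: the freshly rendered `docker.service.new` only replaces the installed unit (and triggers a reload/restart) when the contents actually differ. Here it fires because the unit does not exist yet ("can't stat"). A minimal local sketch of the same pattern, with temp files standing in for `/lib/systemd/system/docker.service`:

```shell
# Idempotent "replace only if changed" sketch; temp files stand in for
# /lib/systemd/system/docker.service and docker.service.new on the VM.
cur=$(mktemp) && new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd --old-flags\n' > "$cur"
printf 'ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 --tlsverify\n' > "$new"
status=unchanged
if ! diff -u "$cur" "$new" > /dev/null; then
  # on the real VM this branch also runs:
  #   sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
  mv "$new" "$cur"
  status=replaced
fi
echo "unit $status"
```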
	I0709 10:25:23.370298    6700 client.go:171] duration metric: took 1m58.7610291s to LocalClient.Create
	I0709 10:25:23.370298    6700 start.go:167] duration metric: took 1m58.7611311s to libmachine.API.Create "ha-400600"
	I0709 10:25:23.370298    6700 start.go:293] postStartSetup for "ha-400600-m03" (driver="hyperv")
	I0709 10:25:23.370298    6700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 10:25:23.381348    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 10:25:23.382305    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:25.595109    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:25.595231    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:25.595363    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:28.233048    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:28.233936    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:28.233936    6700 sshutil.go:53] new ssh client: &{IP:172.18.201.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\id_rsa Username:docker}
	I0709 10:25:28.333550    6700 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9512333s)
	I0709 10:25:28.347276    6700 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 10:25:28.355192    6700 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 10:25:28.355306    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 10:25:28.355708    6700 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 10:25:28.356572    6700 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 10:25:28.358314    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 10:25:28.371350    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 10:25:28.394362    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 10:25:28.449449    6700 start.go:296] duration metric: took 5.0791393s for postStartSetup
	I0709 10:25:28.452230    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:30.637398    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:30.637581    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:30.637833    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:33.241536    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:33.242537    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:33.242537    6700 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\config.json ...
	I0709 10:25:33.244976    6700 start.go:128] duration metric: took 2m8.6394644s to createHost
	I0709 10:25:33.244976    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:35.442543    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:35.443317    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:35.443407    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:38.085383    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:38.085518    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:38.092100    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:25:38.092205    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:25:38.092205    6700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 10:25:38.217553    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720545938.208908593
	
	I0709 10:25:38.218172    6700 fix.go:216] guest clock: 1720545938.208908593
	I0709 10:25:38.218172    6700 fix.go:229] Guest: 2024-07-09 10:25:38.208908593 -0700 PDT Remote: 2024-07-09 10:25:33.2449769 -0700 PDT m=+570.586302101 (delta=4.963931693s)
	I0709 10:25:38.218240    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:40.500679    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:40.501453    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:40.501453    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:43.193090    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:43.193090    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:43.201427    6700 main.go:141] libmachine: Using SSH client type: native
	I0709 10:25:43.202340    6700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.201.166 22 <nil> <nil>}
	I0709 10:25:43.202340    6700 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720545938
	I0709 10:25:43.346308    6700 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 17:25:38 UTC 2024
	
	I0709 10:25:43.346380    6700 fix.go:236] clock set: Tue Jul  9 17:25:38 UTC 2024
	 (err=<nil>)
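The clock fix above (`fix.go`) reads the guest clock with `date +%s.%N` over SSH, compares it to the host-side timestamp, and resets the guest with `sudo date -s @<epoch>` when they drift. The delta arithmetic, sketched with the epoch values from this log (the ~5s delta matches the `delta=4.963931693s` reported):

```shell
# Epoch values taken from the log above; the delta is the VM clock drift
# that minikube corrects via `sudo date -s @${guest}` on the guest.
guest=1720545938        # guest clock (epoch seconds, from `date +%s.%N`)
remote=1720545933       # host-side timestamp when the probe was issued
delta=$((guest - remote))
echo "delta=${delta}s"
```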
	I0709 10:25:43.346380    6700 start.go:83] releasing machines lock for "ha-400600-m03", held for 2m18.7412242s
	I0709 10:25:43.346649    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:45.586826    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:45.587346    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:45.587486    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:48.209545    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:48.209545    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:48.212417    6700 out.go:177] * Found network options:
	I0709 10:25:48.214998    6700 out.go:177]   - NO_PROXY=172.18.204.161,172.18.194.29
	W0709 10:25:48.217163    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 10:25:48.217163    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 10:25:48.219334    6700 out.go:177]   - NO_PROXY=172.18.204.161,172.18.194.29
	W0709 10:25:48.221005    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 10:25:48.221005    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 10:25:48.223052    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 10:25:48.223052    6700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 10:25:48.225294    6700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 10:25:48.225882    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:48.239962    6700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 10:25:48.239962    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600-m03 ).state
	I0709 10:25:50.567215    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:50.567215    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:50.567786    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:50.570775    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:25:50.570857    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:50.571001    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 10:25:53.288591    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:53.288591    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:53.288591    6700 sshutil.go:53] new ssh client: &{IP:172.18.201.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\id_rsa Username:docker}
	I0709 10:25:53.320991    6700 main.go:141] libmachine: [stdout =====>] : 172.18.201.166
	
	I0709 10:25:53.321177    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:25:53.321494    6700 sshutil.go:53] new ssh client: &{IP:172.18.201.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600-m03\id_rsa Username:docker}
	I0709 10:25:53.437066    6700 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.211759s)
	I0709 10:25:53.437066    6700 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1970911s)
	W0709 10:25:53.437571    6700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 10:25:53.451941    6700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 10:25:53.481318    6700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 10:25:53.481434    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:25:53.481655    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 10:25:53.529203    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 10:25:53.565612    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 10:25:53.585057    6700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 10:25:53.597313    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 10:25:53.629796    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:25:53.660689    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 10:25:53.692784    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 10:25:53.726475    6700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 10:25:53.758932    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 10:25:53.793671    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 10:25:53.827556    6700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 10:25:53.859317    6700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 10:25:53.891874    6700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 10:25:53.924811    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:25:54.141022    6700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 10:25:54.176808    6700 start.go:494] detecting cgroup driver to use...
	I0709 10:25:54.193627    6700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 10:25:54.229841    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:25:54.265606    6700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 10:25:54.308905    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 10:25:54.352191    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:25:54.389134    6700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 10:25:54.452587    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 10:25:54.478291    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
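Both `crictl.yaml` writes above use the same `mkdir -p … && printf … | sudo tee` idiom to point crictl at the active CRI socket: first `unix:///run/containerd/containerd.sock`, then `unix:///var/run/cri-dockerd.sock` once Docker is selected as the runtime. The same idiom against a scratch directory (a sketch; the temp dir stands in for `/etc` on the VM):

```shell
etc=$(mktemp -d)   # stands in for /etc on the VM
# minikube's pattern: ensure the directory exists, then write the endpoint
mkdir -p "$etc" && printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' \
  | tee "$etc/crictl.yaml" > /dev/null
cat "$etc/crictl.yaml"
```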
	I0709 10:25:54.527807    6700 ssh_runner.go:195] Run: which cri-dockerd
	I0709 10:25:54.546760    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 10:25:54.565787    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 10:25:54.614962    6700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 10:25:54.810290    6700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 10:25:54.997721    6700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 10:25:54.997840    6700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 10:25:55.043731    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:25:55.253583    6700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 10:25:57.862610    6700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6090207s)
	I0709 10:25:57.874789    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 10:25:57.912451    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:25:57.951153    6700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 10:25:58.161761    6700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 10:25:58.371132    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:25:58.576135    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 10:25:58.617942    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 10:25:58.653973    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:25:58.877249    6700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 10:25:58.985980    6700 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 10:25:58.999388    6700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 10:25:59.008478    6700 start.go:562] Will wait 60s for crictl version
	I0709 10:25:59.020694    6700 ssh_runner.go:195] Run: which crictl
	I0709 10:25:59.039259    6700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 10:25:59.097519    6700 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 10:25:59.107756    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:25:59.152282    6700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 10:25:59.191299    6700 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 10:25:59.193807    6700 out.go:177]   - env NO_PROXY=172.18.204.161
	I0709 10:25:59.196660    6700 out.go:177]   - env NO_PROXY=172.18.204.161,172.18.194.29
	I0709 10:25:59.199651    6700 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 10:25:59.203589    6700 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 10:25:59.203589    6700 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 10:25:59.203589    6700 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 10:25:59.203589    6700 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 10:25:59.206504    6700 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 10:25:59.206504    6700 ip.go:210] interface addr: 172.18.192.1/20
	I0709 10:25:59.216500    6700 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 10:25:59.224070    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 10:25:59.248382    6700 mustload.go:65] Loading cluster: ha-400600
	I0709 10:25:59.249047    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:25:59.249267    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:26:01.405277    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:26:01.405277    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:26:01.405381    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:26:01.406194    6700 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600 for IP: 172.18.201.166
	I0709 10:26:01.406265    6700 certs.go:194] generating shared ca certs ...
	I0709 10:26:01.406265    6700 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:26:01.406969    6700 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 10:26:01.407361    6700 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 10:26:01.407599    6700 certs.go:256] generating profile certs ...
	I0709 10:26:01.407778    6700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\client.key
	I0709 10:26:01.408344    6700 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.44fcd7ea
	I0709 10:26:01.408561    6700 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.44fcd7ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.204.161 172.18.194.29 172.18.201.166 172.18.207.254]
	I0709 10:26:01.571022    6700 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.44fcd7ea ...
	I0709 10:26:01.571022    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.44fcd7ea: {Name:mk44a6f67565d8d3f66ae0e785452857941e5f1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:26:01.572320    6700 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.44fcd7ea ...
	I0709 10:26:01.573367    6700 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.44fcd7ea: {Name:mk59de0b86a8a2193f4a1b38ab929a444a6dae7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 10:26:01.574104    6700 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt.44fcd7ea -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt
	I0709 10:26:01.586050    6700 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key.44fcd7ea -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key
	I0709 10:26:01.587528    6700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key
	I0709 10:26:01.587528    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 10:26:01.587528    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 10:26:01.588065    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 10:26:01.588137    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 10:26:01.588137    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 10:26:01.588137    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 10:26:01.588886    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 10:26:01.589345    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 10:26:01.589525    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 10:26:01.589525    6700 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 10:26:01.590073    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 10:26:01.590280    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 10:26:01.590280    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 10:26:01.590830    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 10:26:01.591204    6700 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 10:26:01.591204    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 10:26:01.591204    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 10:26:01.591858    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:26:01.591897    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:26:03.805009    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:26:03.805701    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:26:03.805701    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:26:06.453552    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:26:06.454090    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:26:06.454284    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:26:06.556930    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0709 10:26:06.565595    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0709 10:26:06.603305    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0709 10:26:06.612246    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0709 10:26:06.655647    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0709 10:26:06.662776    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0709 10:26:06.697065    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0709 10:26:06.704268    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0709 10:26:06.739800    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0709 10:26:06.747470    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0709 10:26:06.783155    6700 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0709 10:26:06.791028    6700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0709 10:26:06.814725    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 10:26:06.864353    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 10:26:06.914012    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 10:26:06.961286    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 10:26:07.013047    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0709 10:26:07.070964    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0709 10:26:07.120270    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 10:26:07.175260    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-400600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 10:26:07.223782    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 10:26:07.272007    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 10:26:07.320578    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 10:26:07.367792    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0709 10:26:07.400738    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0709 10:26:07.444517    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0709 10:26:07.477233    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0709 10:26:07.511570    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0709 10:26:07.543749    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0709 10:26:07.576422    6700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0709 10:26:07.624031    6700 ssh_runner.go:195] Run: openssl version
	I0709 10:26:07.646849    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 10:26:07.683020    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 10:26:07.691357    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 10:26:07.704632    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 10:26:07.726067    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 10:26:07.759669    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 10:26:07.792584    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 10:26:07.799786    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 10:26:07.812372    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 10:26:07.837905    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 10:26:07.870777    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 10:26:07.902540    6700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:26:07.910123    6700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:26:07.929552    6700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 10:26:07.951160    6700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 10:26:07.985870    6700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 10:26:07.992800    6700 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 10:26:07.992800    6700 kubeadm.go:928] updating node {m03 172.18.201.166 8443 v1.30.2 docker true true} ...
	I0709 10:26:07.992800    6700 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-400600-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.201.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 10:26:07.993340    6700 kube-vip.go:115] generating kube-vip config ...
	I0709 10:26:08.006882    6700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0709 10:26:08.036270    6700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0709 10:26:08.036755    6700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0709 10:26:08.049231    6700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 10:26:08.075721    6700 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0709 10:26:08.089122    6700 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0709 10:26:08.108360    6700 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0709 10:26:08.108360    6700 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0709 10:26:08.108360    6700 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0709 10:26:08.108656    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0709 10:26:08.108656    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0709 10:26:08.121319    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 10:26:08.123100    6700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0709 10:26:08.123100    6700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0709 10:26:08.144440    6700 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0709 10:26:08.144440    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0709 10:26:08.144440    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0709 10:26:08.144440    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0709 10:26:08.144440    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0709 10:26:08.157454    6700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0709 10:26:08.202678    6700 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0709 10:26:08.202678    6700 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0709 10:26:09.492901    6700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0709 10:26:09.511062    6700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0709 10:26:09.543208    6700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 10:26:09.575792    6700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0709 10:26:09.618986    6700 ssh_runner.go:195] Run: grep 172.18.207.254	control-plane.minikube.internal$ /etc/hosts
	I0709 10:26:09.625261    6700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 10:26:09.664072    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:26:09.871371    6700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:26:09.908283    6700 host.go:66] Checking if "ha-400600" exists ...
	I0709 10:26:09.909323    6700 start.go:316] joinCluster: &{Name:ha-400600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-400600 Namespace:default APIServerHAVIP:172.18.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.161 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.194.29 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.18.201.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 10:26:09.909524    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0709 10:26:09.909524    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-400600 ).state
	I0709 10:26:12.144942    6700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 10:26:12.144942    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:26:12.145494    6700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-400600 ).networkadapters[0]).ipaddresses[0]
	I0709 10:26:14.769879    6700 main.go:141] libmachine: [stdout =====>] : 172.18.204.161
	
	I0709 10:26:14.769879    6700 main.go:141] libmachine: [stderr =====>] : 
	I0709 10:26:14.769879    6700 sshutil.go:53] new ssh client: &{IP:172.18.204.161 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-400600\id_rsa Username:docker}
	I0709 10:26:14.984087    6700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.074551s)
	I0709 10:26:14.984087    6700 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.18.201.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:26:14.984087    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 37dudu.o9hs9ibo2r1ddpqu --discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-400600-m03 --control-plane --apiserver-advertise-address=172.18.201.166 --apiserver-bind-port=8443"
	I0709 10:27:04.131140    6700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 37dudu.o9hs9ibo2r1ddpqu --discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-400600-m03 --control-plane --apiserver-advertise-address=172.18.201.166 --apiserver-bind-port=8443": (49.1468144s)
	I0709 10:27:04.131211    6700 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0709 10:27:04.887742    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-400600-m03 minikube.k8s.io/updated_at=2024_07_09T10_27_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=ha-400600 minikube.k8s.io/primary=false
	I0709 10:27:05.086368    6700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-400600-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0709 10:27:05.297861    6700 start.go:318] duration metric: took 55.3884049s to joinCluster
	I0709 10:27:05.297861    6700 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.18.201.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 10:27:05.298861    6700 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:27:05.301869    6700 out.go:177] * Verifying Kubernetes components...
	I0709 10:27:05.319867    6700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 10:27:05.782536    6700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 10:27:05.830797    6700 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:27:05.831820    6700 kapi.go:59] client config for ha-400600: &rest.Config{Host:"https://172.18.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-400600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0709 10:27:05.831949    6700 kubeadm.go:477] Overriding stale ClientConfig host https://172.18.207.254:8443 with https://172.18.204.161:8443
	I0709 10:27:05.832913    6700 node_ready.go:35] waiting up to 6m0s for node "ha-400600-m03" to be "Ready" ...
	I0709 10:27:05.833207    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:05.833207    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:05.833207    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:05.833207    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:05.847965    6700 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0709 10:27:06.347022    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:06.347022    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:06.347022    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:06.347022    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:06.353148    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:06.838909    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:06.838909    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:06.838909    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:06.838909    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:06.849002    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:27:07.346403    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:07.346403    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:07.346403    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:07.346403    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:07.353011    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:07.836108    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:07.836108    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:07.836197    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:07.836197    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:07.840780    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:07.842407    6700 node_ready.go:53] node "ha-400600-m03" has status "Ready":"False"
	I0709 10:27:08.343254    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:08.343254    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:08.343254    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:08.343254    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:08.353803    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:27:08.836429    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:08.836429    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:08.836429    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:08.836429    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:08.843669    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:27:09.340224    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:09.340461    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:09.340461    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:09.340461    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:09.344300    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:09.842559    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:09.842629    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:09.842629    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:09.842629    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:09.847302    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:09.848393    6700 node_ready.go:53] node "ha-400600-m03" has status "Ready":"False"
	I0709 10:27:10.344426    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:10.344426    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:10.344426    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:10.344426    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:10.348931    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:10.835462    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:10.835523    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:10.835523    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:10.835523    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:10.839944    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:11.341841    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:11.341841    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:11.342065    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:11.342065    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:11.347373    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:11.846325    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:11.846395    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:11.846395    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:11.846395    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:11.851280    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:11.852190    6700 node_ready.go:53] node "ha-400600-m03" has status "Ready":"False"
	I0709 10:27:12.340241    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:12.340241    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:12.340241    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:12.340241    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:12.345498    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:12.834669    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:12.834669    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:12.834669    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:12.834669    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:12.847267    6700 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0709 10:27:13.340311    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:13.340424    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:13.340424    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:13.340424    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:13.345900    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:13.842898    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:13.842898    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:13.842898    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:13.842898    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:13.846196    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.333861    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:14.333861    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.333861    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.333958    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.339937    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:14.340716    6700 node_ready.go:49] node "ha-400600-m03" has status "Ready":"True"
	I0709 10:27:14.340716    6700 node_ready.go:38] duration metric: took 8.5077823s for node "ha-400600-m03" to be "Ready" ...
	I0709 10:27:14.340716    6700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:27:14.340716    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:27:14.340716    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.340716    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.340716    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.350422    6700 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0709 10:27:14.359360    6700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.359891    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zbxnq
	I0709 10:27:14.359891    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.359891    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.360083    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.363232    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.364538    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:14.364538    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.364538    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.364638    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.367386    6700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 10:27:14.368732    6700 pod_ready.go:92] pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:14.368732    6700 pod_ready.go:81] duration metric: took 8.8409ms for pod "coredns-7db6d8ff4d-zbxnq" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.368732    6700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.368861    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zst2x
	I0709 10:27:14.368861    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.368861    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.368861    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.375686    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:14.376418    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:14.376418    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.376418    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.376418    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.379518    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.380791    6700 pod_ready.go:92] pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:14.380871    6700 pod_ready.go:81] duration metric: took 12.0591ms for pod "coredns-7db6d8ff4d-zst2x" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.380871    6700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.380954    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600
	I0709 10:27:14.380954    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.380954    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.381027    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.384529    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.385878    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:14.385951    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.385951    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.385951    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.389533    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.390517    6700 pod_ready.go:92] pod "etcd-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:14.390517    6700 pod_ready.go:81] duration metric: took 9.6455ms for pod "etcd-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.390517    6700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.390517    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m02
	I0709 10:27:14.390517    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.390517    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.390517    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.427658    6700 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0709 10:27:14.428957    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:14.429015    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.429015    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.429015    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.432651    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:14.433374    6700 pod_ready.go:92] pod "etcd-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:14.433420    6700 pod_ready.go:81] duration metric: took 42.9031ms for pod "etcd-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.433420    6700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.536590    6700 request.go:629] Waited for 103.1699ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m03
	I0709 10:27:14.536900    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/etcd-ha-400600-m03
	I0709 10:27:14.536900    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.536900    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.536900    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.544395    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:27:14.740969    6700 request.go:629] Waited for 195.8636ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:14.741184    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:14.741386    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.741386    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.741386    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.745404    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:14.747200    6700 pod_ready.go:92] pod "etcd-ha-400600-m03" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:14.747261    6700 pod_ready.go:81] duration metric: took 313.8404ms for pod "etcd-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.747319    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:14.944441    6700 request.go:629] Waited for 197.0043ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600
	I0709 10:27:14.944903    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600
	I0709 10:27:14.944957    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:14.944957    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:14.944957    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:14.950415    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:15.149670    6700 request.go:629] Waited for 198.3095ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:15.149816    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:15.149816    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:15.149816    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:15.149816    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:15.155987    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:15.157141    6700 pod_ready.go:92] pod "kube-apiserver-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:15.157141    6700 pod_ready.go:81] duration metric: took 409.8209ms for pod "kube-apiserver-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:15.157141    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:15.335298    6700 request.go:629] Waited for 177.7955ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600-m02
	I0709 10:27:15.335298    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600-m02
	I0709 10:27:15.335298    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:15.335298    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:15.335298    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:15.339932    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:15.539569    6700 request.go:629] Waited for 197.9522ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:15.539672    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:15.539672    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:15.539672    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:15.539672    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:15.544086    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:15.545985    6700 pod_ready.go:92] pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:15.545985    6700 pod_ready.go:81] duration metric: took 388.8426ms for pod "kube-apiserver-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:15.545985    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:15.745693    6700 request.go:629] Waited for 199.5933ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600-m03
	I0709 10:27:15.745896    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-400600-m03
	I0709 10:27:15.745896    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:15.746006    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:15.746006    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:15.750473    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:15.934181    6700 request.go:629] Waited for 181.7701ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:15.934516    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:15.934516    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:15.934516    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:15.934516    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:15.938984    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:15.940780    6700 pod_ready.go:92] pod "kube-apiserver-ha-400600-m03" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:15.940780    6700 pod_ready.go:81] duration metric: took 394.7947ms for pod "kube-apiserver-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:15.940780    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:16.137064    6700 request.go:629] Waited for 196.1844ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600
	I0709 10:27:16.137537    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600
	I0709 10:27:16.137537    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:16.137623    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:16.137623    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:16.143355    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:16.341084    6700 request.go:629] Waited for 196.5253ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:16.341564    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:16.341564    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:16.341564    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:16.341564    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:16.348497    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:16.349377    6700 pod_ready.go:92] pod "kube-controller-manager-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:16.349377    6700 pod_ready.go:81] duration metric: took 408.5959ms for pod "kube-controller-manager-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:16.349522    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:16.546214    6700 request.go:629] Waited for 196.3771ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m02
	I0709 10:27:16.546322    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m02
	I0709 10:27:16.546459    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:16.546459    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:16.546459    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:16.551893    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:16.734346    6700 request.go:629] Waited for 180.5121ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:16.734598    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:16.734812    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:16.734812    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:16.734812    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:16.739241    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:16.740302    6700 pod_ready.go:92] pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:16.740388    6700 pod_ready.go:81] duration metric: took 390.7789ms for pod "kube-controller-manager-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:16.740388    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:16.937456    6700 request.go:629] Waited for 196.7992ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m03
	I0709 10:27:16.937715    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m03
	I0709 10:27:16.937715    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:16.937715    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:16.937715    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:16.942313    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:17.143614    6700 request.go:629] Waited for 199.5741ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.143879    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.143879    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:17.143976    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:17.143976    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:17.148378    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:17.346530    6700 request.go:629] Waited for 93.3465ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m03
	I0709 10:27:17.346530    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m03
	I0709 10:27:17.346530    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:17.346530    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:17.346530    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:17.352714    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:17.548703    6700 request.go:629] Waited for 194.5504ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.548763    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.548763    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:17.548763    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:17.548763    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:17.554340    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:17.755334    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-400600-m03
	I0709 10:27:17.755334    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:17.755409    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:17.755409    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:17.762797    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:27:17.943994    6700 request.go:629] Waited for 179.6134ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.944208    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:17.944280    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:17.944280    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:17.944280    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:17.954678    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:27:17.956278    6700 pod_ready.go:92] pod "kube-controller-manager-ha-400600-m03" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:17.956336    6700 pod_ready.go:81] duration metric: took 1.2159447s for pod "kube-controller-manager-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:17.956336    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7k7w8" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:18.148614    6700 request.go:629] Waited for 192.0975ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7k7w8
	I0709 10:27:18.148837    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7k7w8
	I0709 10:27:18.148837    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:18.148837    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:18.148837    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:18.154655    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:18.336921    6700 request.go:629] Waited for 180.6165ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:18.337120    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:18.337120    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:18.337248    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:18.337248    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:18.341516    6700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 10:27:18.342441    6700 pod_ready.go:92] pod "kube-proxy-7k7w8" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:18.342441    6700 pod_ready.go:81] duration metric: took 386.0362ms for pod "kube-proxy-7k7w8" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:18.342441    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-djlzm" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:18.542799    6700 request.go:629] Waited for 199.8068ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djlzm
	I0709 10:27:18.542799    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djlzm
	I0709 10:27:18.542799    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:18.542799    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:18.542799    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:18.548800    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:18.747347    6700 request.go:629] Waited for 196.9674ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:18.747567    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:18.747567    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:18.747567    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:18.747567    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:18.753786    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:18.754673    6700 pod_ready.go:92] pod "kube-proxy-djlzm" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:18.754673    6700 pod_ready.go:81] duration metric: took 412.2311ms for pod "kube-proxy-djlzm" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:18.754673    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7rdj" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:18.936433    6700 request.go:629] Waited for 181.5939ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q7rdj
	I0709 10:27:18.936433    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q7rdj
	I0709 10:27:18.936433    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:18.936433    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:18.936433    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:18.941426    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:19.140044    6700 request.go:629] Waited for 197.2849ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:19.140044    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:19.140044    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:19.140044    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:19.140044    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:19.145538    6700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 10:27:19.146906    6700 pod_ready.go:92] pod "kube-proxy-q7rdj" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:19.146999    6700 pod_ready.go:81] duration metric: took 392.232ms for pod "kube-proxy-q7rdj" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:19.146999    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:19.343415    6700 request.go:629] Waited for 196.168ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600
	I0709 10:27:19.343415    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600
	I0709 10:27:19.343645    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:19.343645    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:19.343645    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:19.348909    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:19.533722    6700 request.go:629] Waited for 183.928ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:19.533722    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600
	I0709 10:27:19.533722    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:19.533722    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:19.533722    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:19.541714    6700 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0709 10:27:19.543384    6700 pod_ready.go:92] pod "kube-scheduler-ha-400600" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:19.543384    6700 pod_ready.go:81] duration metric: took 396.3839ms for pod "kube-scheduler-ha-400600" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:19.543384    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:19.739484    6700 request.go:629] Waited for 195.6193ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m02
	I0709 10:27:19.739726    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m02
	I0709 10:27:19.739788    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:19.739813    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:19.739813    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:19.744225    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:19.944999    6700 request.go:629] Waited for 199.2037ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:19.944999    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m02
	I0709 10:27:19.944999    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:19.944999    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:19.944999    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:19.951129    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:19.952107    6700 pod_ready.go:92] pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:19.952221    6700 pod_ready.go:81] duration metric: took 408.7363ms for pod "kube-scheduler-ha-400600-m02" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:19.952221    6700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:20.148888    6700 request.go:629] Waited for 196.157ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m03
	I0709 10:27:20.149022    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-400600-m03
	I0709 10:27:20.149022    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.149022    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.149022    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.159012    6700 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0709 10:27:20.337749    6700 request.go:629] Waited for 177.4541ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:20.338051    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes/ha-400600-m03
	I0709 10:27:20.338051    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.338051    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.338051    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.342665    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:20.344199    6700 pod_ready.go:92] pod "kube-scheduler-ha-400600-m03" in "kube-system" namespace has status "Ready":"True"
	I0709 10:27:20.344199    6700 pod_ready.go:81] duration metric: took 391.9767ms for pod "kube-scheduler-ha-400600-m03" in "kube-system" namespace to be "Ready" ...
	I0709 10:27:20.344199    6700 pod_ready.go:38] duration metric: took 6.003468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 10:27:20.344199    6700 api_server.go:52] waiting for apiserver process to appear ...
	I0709 10:27:20.358274    6700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 10:27:20.388561    6700 api_server.go:72] duration metric: took 15.090664s to wait for apiserver process to appear ...
	I0709 10:27:20.388638    6700 api_server.go:88] waiting for apiserver healthz status ...
	I0709 10:27:20.388688    6700 api_server.go:253] Checking apiserver healthz at https://172.18.204.161:8443/healthz ...
	I0709 10:27:20.399483    6700 api_server.go:279] https://172.18.204.161:8443/healthz returned 200:
	ok
	I0709 10:27:20.399892    6700 round_trippers.go:463] GET https://172.18.204.161:8443/version
	I0709 10:27:20.399990    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.399990    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.399990    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.401575    6700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0709 10:27:20.401956    6700 api_server.go:141] control plane version: v1.30.2
	I0709 10:27:20.401956    6700 api_server.go:131] duration metric: took 13.2685ms to wait for apiserver health ...
	I0709 10:27:20.401956    6700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 10:27:20.537937    6700 request.go:629] Waited for 135.7777ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:27:20.538396    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:27:20.538396    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.538396    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.538396    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.549615    6700 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0709 10:27:20.559823    6700 system_pods.go:59] 24 kube-system pods found
	I0709 10:27:20.559823    6700 system_pods.go:61] "coredns-7db6d8ff4d-zbxnq" [127df4db-c095-440f-99a7-9292ba82a544] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "coredns-7db6d8ff4d-zst2x" [826902b3-67ea-41ab-8e36-ede312957536] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "etcd-ha-400600" [0ff09041-fa9f-43ec-bc74-714f695696dd] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "etcd-ha-400600-m02" [3b4c61e9-fc5d-4949-9270-1be8dae8a1eb] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "etcd-ha-400600-m03" [243b6937-3e8a-4141-9caf-c62c6a5ff30a] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kindnet-9qlks" [902e6330-70e1-4dc7-abdb-c7fbc7bfc051] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kindnet-fnjm5" [3c5407e2-73e5-4514-a15d-1eb1e4355e09] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kindnet-qjr4d" [323f057b-87f0-43ad-80ba-19045dcf980e] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-apiserver-ha-400600" [8fa85247-6e51-4fac-b7f3-c8d1853320dc] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-apiserver-ha-400600-m02" [325f42b9-5ea2-4beb-b2ad-a922f61684eb] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-apiserver-ha-400600-m03" [ace87bbb-a5c5-40ca-a4d3-bc49bbc0e75b] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-controller-manager-ha-400600" [9d031336-f17a-497c-abe1-5d5a2f0b0fd7] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-controller-manager-ha-400600-m02" [9b9c50f2-b753-4baf-9233-11fe5fecbf08] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-controller-manager-ha-400600-m03" [c44033e8-cb30-4957-b85c-ae544b56ac2a] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-proxy-7k7w8" [048f20f9-b1a5-42d4-877d-e4d1393f1a4d] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-proxy-djlzm" [e73d5dec-dbd4-473d-b100-f3392ddb9445] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-proxy-q7rdj" [b8c183f7-8c5e-4103-bb6d-177b36a33a55] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-scheduler-ha-400600" [ac1ef599-6195-41b1-803a-cf249851ad0b] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-scheduler-ha-400600-m02" [ecbe6536-b868-479c-bfdb-d038c413885e] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-scheduler-ha-400600-m03" [a21ac894-2f56-459b-8c90-fa4539572859] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-vip-ha-400600" [d6b5a66d-c55b-49da-b972-18d29a106ee3] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-vip-ha-400600-m02" [98ea4304-96dd-4840-bafc-427e97b286f3] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "kube-vip-ha-400600-m03" [03f3ea79-c50b-4392-8c13-5e9b0c168523] Running
	I0709 10:27:20.559823    6700 system_pods.go:61] "storage-provisioner" [f4b5ca7f-2c94-4c34-93b8-4977a2b723aa] Running
	I0709 10:27:20.559823    6700 system_pods.go:74] duration metric: took 157.8661ms to wait for pod list to return data ...
	I0709 10:27:20.559823    6700 default_sa.go:34] waiting for default service account to be created ...
	I0709 10:27:20.740293    6700 request.go:629] Waited for 180.2859ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/default/serviceaccounts
	I0709 10:27:20.740293    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/default/serviceaccounts
	I0709 10:27:20.740293    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.740293    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.740595    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.745269    6700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 10:27:20.746403    6700 default_sa.go:45] found service account: "default"
	I0709 10:27:20.746477    6700 default_sa.go:55] duration metric: took 186.6537ms for default service account to be created ...
	I0709 10:27:20.746477    6700 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 10:27:20.944581    6700 request.go:629] Waited for 197.9005ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:27:20.944844    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/namespaces/kube-system/pods
	I0709 10:27:20.944844    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:20.944914    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:20.944914    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:20.955373    6700 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 10:27:20.965560    6700 system_pods.go:86] 24 kube-system pods found
	I0709 10:27:20.965560    6700 system_pods.go:89] "coredns-7db6d8ff4d-zbxnq" [127df4db-c095-440f-99a7-9292ba82a544] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "coredns-7db6d8ff4d-zst2x" [826902b3-67ea-41ab-8e36-ede312957536] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "etcd-ha-400600" [0ff09041-fa9f-43ec-bc74-714f695696dd] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "etcd-ha-400600-m02" [3b4c61e9-fc5d-4949-9270-1be8dae8a1eb] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "etcd-ha-400600-m03" [243b6937-3e8a-4141-9caf-c62c6a5ff30a] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kindnet-9qlks" [902e6330-70e1-4dc7-abdb-c7fbc7bfc051] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kindnet-fnjm5" [3c5407e2-73e5-4514-a15d-1eb1e4355e09] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kindnet-qjr4d" [323f057b-87f0-43ad-80ba-19045dcf980e] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-apiserver-ha-400600" [8fa85247-6e51-4fac-b7f3-c8d1853320dc] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-apiserver-ha-400600-m02" [325f42b9-5ea2-4beb-b2ad-a922f61684eb] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-apiserver-ha-400600-m03" [ace87bbb-a5c5-40ca-a4d3-bc49bbc0e75b] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-controller-manager-ha-400600" [9d031336-f17a-497c-abe1-5d5a2f0b0fd7] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-controller-manager-ha-400600-m02" [9b9c50f2-b753-4baf-9233-11fe5fecbf08] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-controller-manager-ha-400600-m03" [c44033e8-cb30-4957-b85c-ae544b56ac2a] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-proxy-7k7w8" [048f20f9-b1a5-42d4-877d-e4d1393f1a4d] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-proxy-djlzm" [e73d5dec-dbd4-473d-b100-f3392ddb9445] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-proxy-q7rdj" [b8c183f7-8c5e-4103-bb6d-177b36a33a55] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-scheduler-ha-400600" [ac1ef599-6195-41b1-803a-cf249851ad0b] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-scheduler-ha-400600-m02" [ecbe6536-b868-479c-bfdb-d038c413885e] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-scheduler-ha-400600-m03" [a21ac894-2f56-459b-8c90-fa4539572859] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-vip-ha-400600" [d6b5a66d-c55b-49da-b972-18d29a106ee3] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-vip-ha-400600-m02" [98ea4304-96dd-4840-bafc-427e97b286f3] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "kube-vip-ha-400600-m03" [03f3ea79-c50b-4392-8c13-5e9b0c168523] Running
	I0709 10:27:20.965560    6700 system_pods.go:89] "storage-provisioner" [f4b5ca7f-2c94-4c34-93b8-4977a2b723aa] Running
	I0709 10:27:20.965560    6700 system_pods.go:126] duration metric: took 219.0224ms to wait for k8s-apps to be running ...
	I0709 10:27:20.966080    6700 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 10:27:20.976297    6700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 10:27:21.005793    6700 system_svc.go:56] duration metric: took 39.7125ms WaitForService to wait for kubelet
	I0709 10:27:21.005793    6700 kubeadm.go:576] duration metric: took 15.7078945s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 10:27:21.005793    6700 node_conditions.go:102] verifying NodePressure condition ...
	I0709 10:27:21.133948    6700 request.go:629] Waited for 127.8728ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.204.161:8443/api/v1/nodes
	I0709 10:27:21.134055    6700 round_trippers.go:463] GET https://172.18.204.161:8443/api/v1/nodes
	I0709 10:27:21.134055    6700 round_trippers.go:469] Request Headers:
	I0709 10:27:21.134055    6700 round_trippers.go:473]     Accept: application/json, */*
	I0709 10:27:21.134358    6700 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 10:27:21.141082    6700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 10:27:21.143112    6700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:27:21.143412    6700 node_conditions.go:123] node cpu capacity is 2
	I0709 10:27:21.143412    6700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:27:21.143412    6700 node_conditions.go:123] node cpu capacity is 2
	I0709 10:27:21.143412    6700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 10:27:21.143412    6700 node_conditions.go:123] node cpu capacity is 2
	I0709 10:27:21.143412    6700 node_conditions.go:105] duration metric: took 137.6184ms to run NodePressure ...
	I0709 10:27:21.143514    6700 start.go:240] waiting for startup goroutines ...
	I0709 10:27:21.143610    6700 start.go:254] writing updated cluster config ...
	I0709 10:27:21.156152    6700 ssh_runner.go:195] Run: rm -f paused
	I0709 10:27:21.302072    6700 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0709 10:27:21.308827    6700 out.go:177] * Done! kubectl is now configured to use "ha-400600" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 09 17:19:29 ha-400600 cri-dockerd[1326]: time="2024-07-09T17:19:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/58c2b2ac6f9e2690b6605e899ab9b099d191928e5b3f207ef4c238737600fc46/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 17:19:29 ha-400600 cri-dockerd[1326]: time="2024-07-09T17:19:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/699b5efc73ef82c252861888d136c55df7adefdec0dc24464f2c7edc7d01ef23/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 17:19:29 ha-400600 dockerd[1429]: time="2024-07-09T17:19:29.866062905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:19:29 ha-400600 dockerd[1429]: time="2024-07-09T17:19:29.866798312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:19:29 ha-400600 dockerd[1429]: time="2024-07-09T17:19:29.867245717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:19:29 ha-400600 dockerd[1429]: time="2024-07-09T17:19:29.867936224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:19:29 ha-400600 cri-dockerd[1326]: time="2024-07-09T17:19:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e32192946816aeee2298c423db0732ff45aa771356c2af4387ded672c3fd128f/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.218067855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.218486346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.218599944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.218895938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.258839910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.259239502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.259405198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:19:30 ha-400600 dockerd[1429]: time="2024-07-09T17:19:30.259756591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:28:00 ha-400600 dockerd[1429]: time="2024-07-09T17:28:00.349077173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:28:00 ha-400600 dockerd[1429]: time="2024-07-09T17:28:00.349220773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:28:00 ha-400600 dockerd[1429]: time="2024-07-09T17:28:00.349241273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:28:00 ha-400600 dockerd[1429]: time="2024-07-09T17:28:00.349357273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:28:00 ha-400600 cri-dockerd[1326]: time="2024-07-09T17:28:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe8ea0c55db7cfbdee4483c64424b22daabd3958e9bb8b585b18251b610b05f1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 09 17:28:01 ha-400600 cri-dockerd[1326]: time="2024-07-09T17:28:01Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 09 17:28:02 ha-400600 dockerd[1429]: time="2024-07-09T17:28:02.286889643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 17:28:02 ha-400600 dockerd[1429]: time="2024-07-09T17:28:02.287020443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 17:28:02 ha-400600 dockerd[1429]: time="2024-07-09T17:28:02.287036843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 17:28:02 ha-400600 dockerd[1429]: time="2024-07-09T17:28:02.287472145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c38d753e09788       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   fe8ea0c55db7c       busybox-fc5497c4f-q8dt8
	548d2c1ac97b7       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   e32192946816a       coredns-7db6d8ff4d-zst2x
	4ff3baadb8c8f       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   699b5efc73ef8       coredns-7db6d8ff4d-zbxnq
	64effc0264832       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   58c2b2ac6f9e2       storage-provisioner
	eac7b8bb4f49b       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              26 minutes ago      Running             kindnet-cni               0                   5f382f5723fff       kindnet-qjr4d
	42bb9c056d496       53c535741fb44                                                                                         27 minutes ago      Running             kube-proxy                0                   0eadaf19a58a0       kube-proxy-7k7w8
	c25489a3f41d7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   cfb028a1171c9       kube-vip-ha-400600
	e915adad1065b       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   a71d3d70369e8       etcd-ha-400600
	a1cc87b040f15       e874818b3caac                                                                                         27 minutes ago      Running             kube-controller-manager   0                   41930f39bef9d       kube-controller-manager-ha-400600
	fef6bd73c6517       56ce0fd9fb532                                                                                         27 minutes ago      Running             kube-apiserver            0                   71eaea10f68b9       kube-apiserver-ha-400600
	88d916e2452ab       7820c83aa1394                                                                                         27 minutes ago      Running             kube-scheduler            0                   367ca65f8f005       kube-scheduler-ha-400600
	
	
	==> coredns [4ff3baadb8c8] <==
	[INFO] 10.244.0.4:44729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124001s
	[INFO] 10.244.0.4:44116 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.053769591s
	[INFO] 10.244.0.4:55715 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181601s
	[INFO] 10.244.0.4:38152 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028920802s
	[INFO] 10.244.0.4:54687 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000298901s
	[INFO] 10.244.0.4:39755 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000232701s
	[INFO] 10.244.0.4:47376 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000238201s
	[INFO] 10.244.1.2:57447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105601s
	[INFO] 10.244.1.2:45879 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000899s
	[INFO] 10.244.2.2:59081 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074s
	[INFO] 10.244.2.2:48748 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000645s
	[INFO] 10.244.2.2:59259 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001308s
	[INFO] 10.244.0.4:41332 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001594s
	[INFO] 10.244.1.2:38959 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120301s
	[INFO] 10.244.1.2:58703 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000261701s
	[INFO] 10.244.1.2:53423 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001448s
	[INFO] 10.244.2.2:38018 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001515s
	[INFO] 10.244.2.2:44098 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001053s
	[INFO] 10.244.2.2:41721 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000598s
	[INFO] 10.244.0.4:50957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119301s
	[INFO] 10.244.1.2:33071 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189s
	[INFO] 10.244.1.2:52032 - 5 "PTR IN 1.192.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000917s
	[INFO] 10.244.2.2:37018 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192101s
	[INFO] 10.244.2.2:42620 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001187s
	[INFO] 10.244.2.2:60585 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001285s
	
	
	==> coredns [548d2c1ac97b] <==
	[INFO] 10.244.2.2:43489 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000733s
	[INFO] 10.244.2.2:45418 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0002134s
	[INFO] 10.244.0.4:39856 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002405s
	[INFO] 10.244.1.2:53039 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011866542s
	[INFO] 10.244.1.2:49255 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000642s
	[INFO] 10.244.1.2:37031 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001088s
	[INFO] 10.244.1.2:33874 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028676502s
	[INFO] 10.244.1.2:41914 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162s
	[INFO] 10.244.1.2:43276 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000944s
	[INFO] 10.244.2.2:37123 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227601s
	[INFO] 10.244.2.2:56961 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000663s
	[INFO] 10.244.2.2:40967 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001742s
	[INFO] 10.244.2.2:55610 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001329s
	[INFO] 10.244.2.2:33679 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000129601s
	[INFO] 10.244.0.4:45218 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152501s
	[INFO] 10.244.0.4:43941 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164801s
	[INFO] 10.244.0.4:59289 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000399901s
	[INFO] 10.244.1.2:48110 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000693s
	[INFO] 10.244.2.2:59625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001848s
	[INFO] 10.244.0.4:37225 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000254101s
	[INFO] 10.244.0.4:54435 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182901s
	[INFO] 10.244.0.4:51817 - 5 "PTR IN 1.192.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000219301s
	[INFO] 10.244.1.2:41079 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001904s
	[INFO] 10.244.1.2:46791 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104s
	[INFO] 10.244.2.2:34112 - 5 "PTR IN 1.192.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142801s
	
	
	==> describe nodes <==
	Name:               ha-400600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-400600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=ha-400600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T10_19_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 17:19:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-400600
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 17:46:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 17:43:23 +0000   Tue, 09 Jul 2024 17:19:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 17:43:23 +0000   Tue, 09 Jul 2024 17:19:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 17:43:23 +0000   Tue, 09 Jul 2024 17:19:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 17:43:23 +0000   Tue, 09 Jul 2024 17:19:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.204.161
	  Hostname:    ha-400600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 57e7ded038fc422cb9252166abd87e14
	  System UUID:                1e1a00cd-004d-6e42-b1fb-ad4e24bc426a
	  Boot ID:                    650ebcd8-63b1-4424-9b06-df7a08fde84d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q8dt8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-zbxnq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-zst2x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-400600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-qjr4d                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-400600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-400600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-7k7w8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-400600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-400600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-400600 status is now: NodeHasSufficientMemory
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x2 over 27m)  kubelet          Node ha-400600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x2 over 27m)  kubelet          Node ha-400600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x2 over 27m)  kubelet          Node ha-400600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m                node-controller  Node ha-400600 event: Registered Node ha-400600 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-400600 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node ha-400600 event: Registered Node ha-400600 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-400600 event: Registered Node ha-400600 in Controller
	
	
	Name:               ha-400600-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-400600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=ha-400600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_09T10_23_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 17:23:00 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-400600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 17:45:08 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 09 Jul 2024 17:43:55 +0000   Tue, 09 Jul 2024 17:45:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 09 Jul 2024 17:43:55 +0000   Tue, 09 Jul 2024 17:45:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 09 Jul 2024 17:43:55 +0000   Tue, 09 Jul 2024 17:45:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 09 Jul 2024 17:43:55 +0000   Tue, 09 Jul 2024 17:45:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.18.194.29
	  Hostname:    ha-400600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6190dbe481a94900add605d2b7c6d5ff
	  System UUID:                42e9a45a-f84a-924a-bfd7-75e67dc20830
	  Boot ID:                    17ff975b-644a-48a2-9725-dda2d103583a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sf672                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-400600-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-fnjm5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-400600-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-400600-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-djlzm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-400600-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-400600-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-400600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-400600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-400600-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-400600-m02 event: Registered Node ha-400600-m02 in Controller
	  Normal  RegisteredNode           23m                node-controller  Node ha-400600-m02 event: Registered Node ha-400600-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-400600-m02 event: Registered Node ha-400600-m02 in Controller
	  Normal  NodeNotReady             34s                node-controller  Node ha-400600-m02 status is now: NodeNotReady
	
	
	Name:               ha-400600-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-400600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=ha-400600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_09T10_27_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 17:26:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-400600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 17:46:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 17:43:48 +0000   Tue, 09 Jul 2024 17:26:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 17:43:48 +0000   Tue, 09 Jul 2024 17:26:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 17:43:48 +0000   Tue, 09 Jul 2024 17:26:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 17:43:48 +0000   Tue, 09 Jul 2024 17:27:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.201.166
	  Hostname:    ha-400600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 59cfe4c3ff2745f9a8a38dc1ed715dde
	  System UUID:                db9566e8-cf7d-9a47-8e9c-ca188d985bba
	  Boot ID:                    2f814ff1-f7a4-447e-8b19-a1452ef7ba03
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wvs72                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-400600-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-9qlks                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-400600-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-400600-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-q7rdj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-400600-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-400600-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-400600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-400600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-400600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-400600-m03 event: Registered Node ha-400600-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-400600-m03 event: Registered Node ha-400600-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-400600-m03 event: Registered Node ha-400600-m03 in Controller
	
	
	Name:               ha-400600-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-400600-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=ha-400600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_09T10_32_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 17:32:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-400600-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 17:46:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 17:43:04 +0000   Tue, 09 Jul 2024 17:32:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 17:43:04 +0000   Tue, 09 Jul 2024 17:32:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 17:43:04 +0000   Tue, 09 Jul 2024 17:32:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 17:43:04 +0000   Tue, 09 Jul 2024 17:32:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.195.126
	  Hostname:    ha-400600-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d202c848aba47e2b0e555d31373eda0
	  System UUID:                9d127bfc-4def-7d44-9a95-a6d23b8d6f3b
	  Boot ID:                    2f5335b6-f2d9-48a3-b658-d3aa961815e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-d57cx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-q95bn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-400600-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-400600-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-400600-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-400600-m04 event: Registered Node ha-400600-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-400600-m04 event: Registered Node ha-400600-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-400600-m04 event: Registered Node ha-400600-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-400600-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.961487] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +46.989969] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.152280] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul 9 17:18] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[  +0.107159] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.528428] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.180999] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.237108] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.834158] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.189260] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.192051] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.284193] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[ +11.560710] systemd-fstab-generator[1414]: Ignoring "noauto" option for root device
	[  +0.105947] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.087945] systemd-fstab-generator[1664]: Ignoring "noauto" option for root device
	[  +6.396440] systemd-fstab-generator[1871]: Ignoring "noauto" option for root device
	[  +0.094470] kauditd_printk_skb: 70 callbacks suppressed
	[Jul 9 17:19] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.498662] systemd-fstab-generator[2370]: Ignoring "noauto" option for root device
	[ +15.136047] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.926042] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.544792] kauditd_printk_skb: 33 callbacks suppressed
	[Jul 9 17:42] hrtimer: interrupt took 1980404 ns
	
	
	==> etcd [e915adad1065] <==
	{"level":"warn","ts":"2024-07-09T17:46:23.427623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.439648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.453195Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.461634Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.484437Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.493452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.502392Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.50924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.521403Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.543486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.565222Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.585503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.587467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.597612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.607419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.618347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.626451Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.636458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.643826Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.648936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.657425Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.665496Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.667997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.67648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-09T17:46:23.708492Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"77d3936d53d35644","from":"77d3936d53d35644","remote-peer-id":"8881486882be4604","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:46:23 up 29 min,  0 users,  load average: 0.34, 0.34, 0.40
	Linux ha-400600 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [eac7b8bb4f49] <==
	I0709 17:45:50.020012       1 main.go:250] Node ha-400600-m04 has CIDR [10.244.3.0/24] 
	I0709 17:46:00.037719       1 main.go:223] Handling node with IPs: map[172.18.204.161:{}]
	I0709 17:46:00.037883       1 main.go:227] handling current node
	I0709 17:46:00.037901       1 main.go:223] Handling node with IPs: map[172.18.194.29:{}]
	I0709 17:46:00.037909       1 main.go:250] Node ha-400600-m02 has CIDR [10.244.1.0/24] 
	I0709 17:46:00.038035       1 main.go:223] Handling node with IPs: map[172.18.201.166:{}]
	I0709 17:46:00.038065       1 main.go:250] Node ha-400600-m03 has CIDR [10.244.2.0/24] 
	I0709 17:46:00.038128       1 main.go:223] Handling node with IPs: map[172.18.195.126:{}]
	I0709 17:46:00.038189       1 main.go:250] Node ha-400600-m04 has CIDR [10.244.3.0/24] 
	I0709 17:46:10.047721       1 main.go:223] Handling node with IPs: map[172.18.204.161:{}]
	I0709 17:46:10.047822       1 main.go:227] handling current node
	I0709 17:46:10.047838       1 main.go:223] Handling node with IPs: map[172.18.194.29:{}]
	I0709 17:46:10.047846       1 main.go:250] Node ha-400600-m02 has CIDR [10.244.1.0/24] 
	I0709 17:46:10.048315       1 main.go:223] Handling node with IPs: map[172.18.201.166:{}]
	I0709 17:46:10.048425       1 main.go:250] Node ha-400600-m03 has CIDR [10.244.2.0/24] 
	I0709 17:46:10.048940       1 main.go:223] Handling node with IPs: map[172.18.195.126:{}]
	I0709 17:46:10.049303       1 main.go:250] Node ha-400600-m04 has CIDR [10.244.3.0/24] 
	I0709 17:46:20.065890       1 main.go:223] Handling node with IPs: map[172.18.204.161:{}]
	I0709 17:46:20.066001       1 main.go:227] handling current node
	I0709 17:46:20.066019       1 main.go:223] Handling node with IPs: map[172.18.194.29:{}]
	I0709 17:46:20.066028       1 main.go:250] Node ha-400600-m02 has CIDR [10.244.1.0/24] 
	I0709 17:46:20.066574       1 main.go:223] Handling node with IPs: map[172.18.201.166:{}]
	I0709 17:46:20.066681       1 main.go:250] Node ha-400600-m03 has CIDR [10.244.2.0/24] 
	I0709 17:46:20.066900       1 main.go:223] Handling node with IPs: map[172.18.195.126:{}]
	I0709 17:46:20.067064       1 main.go:250] Node ha-400600-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fef6bd73c651] <==
	E0709 17:26:58.430052       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0709 17:26:58.430225       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0709 17:26:58.430398       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 6.6µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0709 17:26:58.431518       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0709 17:26:58.432599       1 timeout.go:142] post-timeout activity - time-elapsed: 2.573703ms, PATCH "/api/v1/namespaces/default/events/ha-400600-m03.17e09b7cb22757ed" result: <nil>
	E0709 17:28:06.153264       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53128: use of closed network connection
	E0709 17:28:06.649244       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53130: use of closed network connection
	E0709 17:28:07.105622       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53132: use of closed network connection
	E0709 17:28:07.635482       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53134: use of closed network connection
	E0709 17:28:08.126944       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53137: use of closed network connection
	E0709 17:28:08.589925       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53139: use of closed network connection
	E0709 17:28:09.078855       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53141: use of closed network connection
	E0709 17:28:09.536295       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53143: use of closed network connection
	E0709 17:28:09.995780       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53145: use of closed network connection
	E0709 17:28:10.782882       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53148: use of closed network connection
	E0709 17:28:21.213080       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53150: use of closed network connection
	E0709 17:28:21.660883       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53152: use of closed network connection
	E0709 17:28:32.114923       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53154: use of closed network connection
	E0709 17:28:32.587010       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53157: use of closed network connection
	E0709 17:28:43.052063       1 conn.go:339] Error on socket receive: read tcp 172.18.207.254:8443->172.18.192.1:53159: use of closed network connection
	I0709 17:32:14.702721       1 trace.go:236] Trace[775803182]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.18.204.161,type:*v1.Endpoints,resource:apiServerIPInfo (09-Jul-2024 17:32:13.921) (total time: 781ms):
	Trace[775803182]: ---"initial value restored" 288ms (17:32:14.209)
	Trace[775803182]: ---"Transaction prepared" 282ms (17:32:14.492)
	Trace[775803182]: ---"Txn call completed" 210ms (17:32:14.702)
	Trace[775803182]: [781.233299ms] [781.233299ms] END
	
	
	==> kube-controller-manager [a1cc87b040f1] <==
	I0709 17:26:57.596782       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-400600-m03\" does not exist"
	I0709 17:26:57.620896       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-400600-m03" podCIDRs=["10.244.2.0/24"]
	I0709 17:26:59.106707       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-400600-m03"
	I0709 17:27:59.401299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="180.062004ms"
	I0709 17:27:59.718081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="316.499459ms"
	I0709 17:27:59.870359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="152.125272ms"
	I0709 17:27:59.904628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.613538ms"
	I0709 17:27:59.905217       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.5µs"
	I0709 17:27:59.993259       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.607351ms"
	I0709 17:27:59.993626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="283.1µs"
	I0709 17:28:00.923962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="192.701µs"
	I0709 17:28:02.692544       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.401608ms"
	I0709 17:28:02.762764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.876448ms"
	I0709 17:28:02.762864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.9µs"
	I0709 17:28:03.429454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.097789ms"
	I0709 17:28:03.429767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.2µs"
	E0709 17:32:21.726868       1 certificate_controller.go:146] Sync csr-2x6jj failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2x6jj": the object has been modified; please apply your changes to the latest version and try again
	E0709 17:32:21.727531       1 certificate_controller.go:146] Sync csr-2x6jj failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2x6jj": the object has been modified; please apply your changes to the latest version and try again
	I0709 17:32:21.789062       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-400600-m04\" does not exist"
	I0709 17:32:21.842920       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-400600-m04" podCIDRs=["10.244.3.0/24"]
	I0709 17:32:24.414427       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-400600-m04"
	I0709 17:32:44.619293       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-400600-m04"
	I0709 17:45:49.631552       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-400600-m04"
	I0709 17:45:49.933921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.024352ms"
	I0709 17:45:49.935302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.6µs"
	
	
	==> kube-proxy [42bb9c056d49] <==
	I0709 17:19:20.229090       1 server_linux.go:69] "Using iptables proxy"
	I0709 17:19:20.242853       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.204.161"]
	I0709 17:19:20.376245       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 17:19:20.376293       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 17:19:20.376313       1 server_linux.go:165] "Using iptables Proxier"
	I0709 17:19:20.381806       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 17:19:20.382702       1 server.go:872] "Version info" version="v1.30.2"
	I0709 17:19:20.382799       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 17:19:20.384144       1 config.go:192] "Starting service config controller"
	I0709 17:19:20.384291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 17:19:20.384609       1 config.go:101] "Starting endpoint slice config controller"
	I0709 17:19:20.384648       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 17:19:20.385730       1 config.go:319] "Starting node config controller"
	I0709 17:19:20.385761       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 17:19:20.484754       1 shared_informer.go:320] Caches are synced for service config
	I0709 17:19:20.486064       1 shared_informer.go:320] Caches are synced for node config
	I0709 17:19:20.486091       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [88d916e2452a] <==
	W0709 17:19:02.503326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0709 17:19:02.503419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0709 17:19:02.574447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0709 17:19:02.574482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0709 17:19:02.621919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 17:19:02.623662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 17:19:02.662666       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0709 17:19:02.662734       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0709 17:19:02.790862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0709 17:19:02.791223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0709 17:19:02.880036       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 17:19:02.880536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0709 17:19:02.889363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0709 17:19:02.889389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0709 17:19:02.897230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0709 17:19:02.897362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0709 17:19:02.908558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0709 17:19:02.908727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0709 17:19:03.086449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0709 17:19:03.086608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0709 17:19:05.328236       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0709 17:26:57.737228       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-q7rdj\": pod kube-proxy-q7rdj is already assigned to node \"ha-400600-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-q7rdj" node="ha-400600-m03"
	E0709 17:26:57.737423       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b8c183f7-8c5e-4103-bb6d-177b36a33a55(kube-system/kube-proxy-q7rdj) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-q7rdj"
	E0709 17:26:57.738766       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-q7rdj\": pod kube-proxy-q7rdj is already assigned to node \"ha-400600-m03\"" pod="kube-system/kube-proxy-q7rdj"
	I0709 17:26:57.738919       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-q7rdj" node="ha-400600-m03"
	
	
	==> kubelet <==
	Jul 09 17:42:05 ha-400600 kubelet[2377]: E0709 17:42:05.142625    2377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 17:42:05 ha-400600 kubelet[2377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 17:42:05 ha-400600 kubelet[2377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 17:42:05 ha-400600 kubelet[2377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 17:42:05 ha-400600 kubelet[2377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 17:43:05 ha-400600 kubelet[2377]: E0709 17:43:05.123043    2377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 17:43:05 ha-400600 kubelet[2377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 17:43:05 ha-400600 kubelet[2377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 17:43:05 ha-400600 kubelet[2377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 17:43:05 ha-400600 kubelet[2377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 17:44:05 ha-400600 kubelet[2377]: E0709 17:44:05.122064    2377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 17:44:05 ha-400600 kubelet[2377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 17:44:05 ha-400600 kubelet[2377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 17:44:05 ha-400600 kubelet[2377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 17:44:05 ha-400600 kubelet[2377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 17:45:05 ha-400600 kubelet[2377]: E0709 17:45:05.123511    2377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 17:45:05 ha-400600 kubelet[2377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 17:45:05 ha-400600 kubelet[2377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 17:45:05 ha-400600 kubelet[2377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 17:45:05 ha-400600 kubelet[2377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 17:46:05 ha-400600 kubelet[2377]: E0709 17:46:05.121865    2377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 17:46:05 ha-400600 kubelet[2377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 17:46:05 ha-400600 kubelet[2377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 17:46:05 ha-400600 kubelet[2377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 17:46:05 ha-400600 kubelet[2377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 10:46:15.312942    6980 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-400600 -n ha-400600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-400600 -n ha-400600: (12.369296s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-400600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (102.76s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (461.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-849000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0709 11:18:04.129615   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 11:18:33.328623   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 11:20:30.103415   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 11:23:04.131879   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-849000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: exit status 90 (7m6.6564202s)

                                                
                                                
-- stdout --
	* [multinode-849000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "multinode-849000-m02" worker node in "multinode-849000" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=172.18.206.134
	  - NO_PROXY=172.18.206.134
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:16:35.705386   11080 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0709 11:16:35.706571   11080 out.go:291] Setting OutFile to fd 1856 ...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.707294   11080 out.go:304] Setting ErrFile to fd 1916...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.730175   11080 out.go:298] Setting JSON to false
	I0709 11:16:35.734088   11080 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7264,"bootTime":1720541731,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 11:16:35.734088   11080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 11:16:35.740900   11080 out.go:177] * [multinode-849000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 11:16:35.746952   11080 notify.go:220] Checking for updates...
	I0709 11:16:35.749517   11080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:16:35.752016   11080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 11:16:35.754074   11080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 11:16:35.757149   11080 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 11:16:35.759785   11080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 11:16:35.763232   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:16:35.763232   11080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 11:16:41.108594   11080 out.go:177] * Using the hyperv driver based on user configuration
	I0709 11:16:41.113436   11080 start.go:297] selected driver: hyperv
	I0709 11:16:41.113436   11080 start.go:901] validating driver "hyperv" against <nil>
	I0709 11:16:41.113436   11080 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 11:16:41.161717   11080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 11:16:41.163562   11080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:16:41.163562   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:16:41.163562   11080 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0709 11:16:41.163562   11080 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0709 11:16:41.163562   11080 start.go:340] cluster config:
	{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:16:41.164325   11080 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 11:16:41.169436   11080 out.go:177] * Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	I0709 11:16:41.171790   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:16:41.171790   11080 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 11:16:41.171790   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:16:41.172900   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:16:41.173204   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:16:41.173497   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:16:41.173834   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json: {Name:mkcd76fd0991636c9ebb3945d5f6230c136234ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:360] acquireMachinesLock for multinode-849000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-849000"
	I0709 11:16:41.175145   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:16:41.175717   11080 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 11:16:41.178833   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:16:41.179697   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:16:41.179858   11080 client.go:168] LocalClient.Create starting
	I0709 11:16:41.180393   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181037   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:16:41.181305   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.181363   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181499   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:43.203345   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:16:44.905448   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:49.977487   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:49.978001   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:49.980413   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:16:50.481409   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: Creating VM...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:53.557877   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:16:53.557877   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:55.342337   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:55.343188   11080 main.go:141] libmachine: Creating VHD
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:16:59.073202   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 250EFD27-3D80-4D94-9BBB-C36AC3EE4AF2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:16:59.073277   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:16:59.081799   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:02.356056   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -SizeBytes 20000MB
	I0709 11:17:04.920871   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:04.921598   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:04.921696   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-849000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000 -DynamicMemoryEnabled $false
	I0709 11:17:10.906954   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000 -Count 2
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:13.117046   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\boot2docker.iso'
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:15.734748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd'
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:18.434648   11080 main.go:141] libmachine: Starting VM...
	I0709 11:17:18.434648   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000
	I0709 11:17:21.548427   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:23.856308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:23.857327   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:23.857477   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:26.424823   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:26.425555   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:27.429457   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:29.669589   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:33.238604   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:35.539152   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:39.150748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:41.412758   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:43.945561   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:43.946556   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:44.948904   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:47.223493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:49.888321   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:52.029346   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:17:52.029346   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:54.184452   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:56.739762   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:56.740551   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:56.747332   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:17:56.757962   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:17:56.757962   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:17:56.888454   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:17:56.888454   11080 buildroot.go:166] provisioning hostname "multinode-849000"
	I0709 11:17:56.888632   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:58.996092   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:01.596255   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:01.596966   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:01.596966   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000 && echo "multinode-849000" | sudo tee /etc/hostname
	I0709 11:18:01.744135   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000
	
	I0709 11:18:01.744309   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:03.902843   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:06.504362   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:06.505105   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:06.511047   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:06.511730   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:06.511730   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:18:06.661183   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:18:06.661276   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:18:06.661276   11080 buildroot.go:174] setting up certificates
	I0709 11:18:06.661276   11080 provision.go:84] configureAuth start
	I0709 11:18:06.661404   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:08.870371   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:08.871487   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:08.871619   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:11.480657   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:13.679886   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:13.680032   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:13.680386   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:16.351593   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:16.351812   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:16.351812   11080 provision.go:143] copyHostCerts
	I0709 11:18:16.351812   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:18:16.351812   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:18:16.352341   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:18:16.352562   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:18:16.353746   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:18:16.353870   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:18:16.353870   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:18:16.354397   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:18:16.355454   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:18:16.355782   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:18:16.355782   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:18:16.356143   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:18:16.357550   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000 san=[127.0.0.1 172.18.206.134 localhost minikube multinode-849000]
	I0709 11:18:16.528750   11080 provision.go:177] copyRemoteCerts
	I0709 11:18:16.542866   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:18:16.543526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:18.745596   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:18.746390   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:18.746524   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:21.394478   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:21.394661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:21.394962   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:21.507114   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9635719s)
	I0709 11:18:21.507261   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:18:21.507746   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:18:21.555636   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:18:21.556231   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0709 11:18:21.603561   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:18:21.604047   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:18:21.651880   11080 provision.go:87] duration metric: took 14.9904677s to configureAuth
	I0709 11:18:21.651880   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:18:21.652889   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:18:21.652889   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:23.890387   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:26.564345   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:26.565125   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:26.565125   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:18:26.688579   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:18:26.688579   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:18:26.688751   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:18:26.688751   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:28.871918   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:31.502951   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:31.503345   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:31.503345   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:18:31.658280   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:18:31.658412   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:33.800464   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:36.418307   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:36.418361   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:36.423718   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:36.423718   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:36.424298   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:18:38.623401   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:18:38.623401   11080 machine.go:97] duration metric: took 46.5939015s to provisionDockerMachine
	I0709 11:18:38.624385   11080 client.go:171] duration metric: took 1m57.4441387s to LocalClient.Create
	I0709 11:18:38.624385   11080 start.go:167] duration metric: took 1m57.4442999s to libmachine.API.Create "multinode-849000"
	I0709 11:18:38.624385   11080 start.go:293] postStartSetup for "multinode-849000" (driver="hyperv")
	I0709 11:18:38.624385   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:18:38.635377   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:18:38.635377   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:40.803077   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:40.803227   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:40.803332   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:43.382675   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:43.483674   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8482809s)
	I0709 11:18:43.496129   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:18:43.504466   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:18:43.504466   11080 command_runner.go:130] > ID=buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:18:43.504466   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:18:43.504466   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:18:43.504466   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:18:43.505074   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:18:43.506014   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:18:43.506014   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:18:43.518207   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:18:43.536167   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:18:43.580014   11080 start.go:296] duration metric: took 4.955526s for postStartSetup
	I0709 11:18:43.583840   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:45.720485   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:48.244917   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:18:48.247885   11080 start.go:128] duration metric: took 2m7.0717492s to createHost
	I0709 11:18:48.247974   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:50.357356   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:52.893710   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:52.893837   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:52.893837   11080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0709 11:18:53.018311   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549133.027082640
	
	I0709 11:18:53.018311   11080 fix.go:216] guest clock: 1720549133.027082640
	I0709 11:18:53.018311   11080 fix.go:229] Guest: 2024-07-09 11:18:53.02708264 -0700 PDT Remote: 2024-07-09 11:18:48.2478857 -0700 PDT m=+132.622337601 (delta=4.77919694s)
	I0709 11:18:53.018461   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:55.134647   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:57.706817   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:57.707574   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:57.707574   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549133
	I0709 11:18:57.837990   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:18:53 UTC 2024
	
	I0709 11:18:57.837990   11080 fix.go:236] clock set: Tue Jul  9 18:18:53 UTC 2024
	 (err=<nil>)
	I0709 11:18:57.837990   11080 start.go:83] releasing machines lock for "multinode-849000", held for 2m16.662394s
	I0709 11:18:57.837990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:59.937542   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:02.440702   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:19:02.440914   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:02.450148   11080 ssh_runner.go:195] Run: cat /version.json
	I0709 11:19:02.451159   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.652788   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:07.368844   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.369236   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.369437   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.395266   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.516234   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:19:07.516234   11080 command_runner.go:130] > {"iso_version": "v1.33.1-1720433170-19199", "kicbase_version": "v0.0.44-1720012048-19186", "minikube_version": "v1.33.1", "commit": "41ed6339bbe6a947e5e92015e7dd216db14d0b72"}
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: cat /version.json: (5.0661785s)
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0755151s)
	I0709 11:19:07.529057   11080 ssh_runner.go:195] Run: systemctl --version
	I0709 11:19:07.538439   11080 command_runner.go:130] > systemd 252 (252)
	I0709 11:19:07.538533   11080 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0709 11:19:07.550293   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:19:07.559188   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0709 11:19:07.559555   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:19:07.570397   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:19:07.596860   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:19:07.598042   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:19:07.598090   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:07.598448   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:07.631211   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:19:07.642798   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:19:07.672487   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:19:07.691044   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:19:07.702345   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:19:07.737161   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.766120   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:19:07.798415   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.831110   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:19:07.865314   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:19:07.899412   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:19:07.929191   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:19:07.959649   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:19:07.977886   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:19:07.990402   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:19:08.021057   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:08.212039   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
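The run of `sed` edits above rewrites containerd's config in place before the restart. A minimal sketch of the two key edits, replayed against a scratch copy (the path `/tmp/containerd-config-sketch.toml` is a stand-in for illustration; the real target on the guest is `/etc/containerd/config.toml`):

```shell
# Scratch copy of the two settings minikube rewrites.
cat > /tmp/containerd-config-sketch.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Pin the pause image and force the cgroupfs driver, as in the log above.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /tmp/containerd-config-sketch.toml
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/containerd-config-sketch.toml
grep -E 'sandbox_image|SystemdCgroup' /tmp/containerd-config-sketch.toml
```

The capture group `( *)` preserves the original indentation, so the edit works at any nesting depth in the TOML.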
	I0709 11:19:08.247477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:08.260899   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Unit]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:19:08.287773   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:19:08.287773   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:19:08.287773   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Service]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Type=notify
	I0709 11:19:08.287773   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:19:08.287773   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:19:08.287773   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:19:08.287773   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:19:08.287773   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:19:08.287773   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:19:08.287773   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:19:08.287773   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:19:08.288322   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:19:08.288322   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:19:08.288322   11080 command_runner.go:130] > ExecStart=
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:19:08.288380   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:19:08.288380   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:19:08.288532   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:19:08.288603   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:19:08.288603   11080 command_runner.go:130] > Delegate=yes
	I0709 11:19:08.288603   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:19:08.288644   11080 command_runner.go:130] > KillMode=process
	I0709 11:19:08.288644   11080 command_runner.go:130] > [Install]
	I0709 11:19:08.288644   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:19:08.299913   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.334941   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:19:08.378216   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.411780   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.445847   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:19:08.504747   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.527698   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:08.557879   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
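The `printf | sudo tee` pattern above is how the CRI endpoint gets switched from containerd to cri-dockerd. A sketch of the same write against a scratch path (`/tmp/etc-sketch/crictl.yaml` is hypothetical; the real file is `/etc/crictl.yaml`, hence the `sudo tee` in the log):

```shell
# Write the crictl runtime endpoint; tee also echoes the content, which is
# the "runtime-endpoint: ..." confirmation line seen in the log.
mkdir -p /tmp/etc-sketch
printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | tee /tmp/etc-sketch/crictl.yaml
```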
	I0709 11:19:08.569949   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:19:08.575730   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:19:08.587321   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:19:08.604542   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:19:08.652744   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:19:08.860138   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:19:09.036606   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:19:09.036846   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:19:09.086669   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:09.274594   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:11.819580   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5449771s)
	I0709 11:19:11.830623   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 11:19:11.865432   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:11.899527   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 11:19:12.080125   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 11:19:12.263695   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.465673   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 11:19:12.506610   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:12.540854   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.740781   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 11:19:12.845180   11080 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 11:19:12.856179   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0709 11:19:12.864333   11080 command_runner.go:130] > Device: 0,22	Inode: 881         Links: 1
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864333   11080 command_runner.go:130] > Modify: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] > Change: 2024-07-09 18:19:12.777376059 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:12.865396   11080 start.go:562] Will wait 60s for crictl version
	I0709 11:19:12.878013   11080 ssh_runner.go:195] Run: which crictl
	I0709 11:19:12.883453   11080 command_runner.go:130] > /usr/bin/crictl
	I0709 11:19:12.896196   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 11:19:12.945750   11080 command_runner.go:130] > Version:  0.1.0
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeName:  docker
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeApiVersion:  v1
	I0709 11:19:12.946914   11080 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 11:19:12.955749   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:12.986144   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:12.997084   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:13.033222   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:13.039328   11080 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 11:19:13.039536   11080 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: 172.18.192.1/20
	I0709 11:19:13.058315   11080 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 11:19:13.064313   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
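The hosts-file update above is a strip-then-append: remove any stale `host.minikube.internal` entry, then add the current gateway IP. Replayed on a scratch copy (`/tmp/hosts-sketch` is a stand-in; the log edits `/etc/hosts` via `sudo cp`), using a plain suffix match in place of the log's tab-anchored one for portability:

```shell
# Seed a hosts file containing a stale minikube entry.
printf '127.0.0.1\tlocalhost\n1.2.3.4\thost.minikube.internal\n' > /tmp/hosts-sketch
# Drop the stale entry and append the fresh gateway mapping.
{ grep -v 'host.minikube.internal$' /tmp/hosts-sketch; printf '172.18.192.1\thost.minikube.internal\n'; } > /tmp/hosts-sketch.new
mv /tmp/hosts-sketch.new /tmp/hosts-sketch
grep 'host.minikube.internal' /tmp/hosts-sketch
```

The brace group matters: both commands redirect into the same new file, which is then copied back atomically enough for this purpose.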
	I0709 11:19:13.085011   11080 kubeadm.go:877] updating cluster {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 11:19:13.085193   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:19:13.094647   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:13.119600   11080 docker.go:685] Got preloaded images: 
	I0709 11:19:13.119753   11080 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 11:19:13.132471   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:13.150071   11080 command_runner.go:139] > {"Repositories":{}}
	I0709 11:19:13.160388   11080 ssh_runner.go:195] Run: which lz4
	I0709 11:19:13.168652   11080 command_runner.go:130] > /usr/bin/lz4
	I0709 11:19:13.168652   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0709 11:19:13.180500   11080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0709 11:19:13.186301   11080 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
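The "existence check" failure above is expected: ssh_runner stats the remote path, and a nonzero exit (file absent) is what triggers the scp of the preload tarball. A sketch of that decision, assuming GNU `stat` as on the guest VM (the path `/tmp/preloaded-sketch.tar.lz4` is illustrative):

```shell
# Ensure the path is absent, then branch the way ssh_runner does.
rm -f /tmp/preloaded-sketch.tar.lz4
if ! stat -c "%s %y" /tmp/preloaded-sketch.tar.lz4 >/dev/null 2>&1; then
  echo "absent: copy the preload tarball over"
fi
```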
	I0709 11:19:14.857940   11080 docker.go:649] duration metric: took 1.6892825s to copy over tarball
	I0709 11:19:14.870175   11080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 11:19:23.389025   11080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5188212s)
	I0709 11:19:23.389025   11080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0709 11:19:23.458573   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:23.485866   11080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0709 11:19:23.486188   11080 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 11:19:23.533118   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:23.744757   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:27.380382   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6356119s)
	I0709 11:19:27.389977   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0709 11:19:27.415657   11080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:27.415657   11080 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 11:19:27.415657   11080 cache_images.go:84] Images are preloaded, skipping loading
	I0709 11:19:27.415657   11080 kubeadm.go:928] updating node { 172.18.206.134 8443 v1.30.2 docker true true} ...
	I0709 11:19:27.415657   11080 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-849000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.206.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 11:19:27.423616   11080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 11:19:27.458657   11080 command_runner.go:130] > cgroupfs
	I0709 11:19:27.459385   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:27.459385   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:27.459452   11080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 11:19:27.459452   11080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.206.134 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-849000 NodeName:multinode-849000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.206.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.206.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 11:19:27.459589   11080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.206.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-849000"
	  kubeletExtraArgs:
	    node-ip: 172.18.206.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.206.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0709 11:19:27.472965   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubeadm
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubectl
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubelet
	I0709 11:19:27.499841   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 11:19:27.511476   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 11:19:27.527506   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0709 11:19:27.555887   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 11:19:27.582917   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
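The kubeadm config printed earlier is a single YAML stream of four documents, written to `/var/tmp/minikube/kubeadm.yaml.new`. A kinds-only skeleton (scratch path, content reduced to the document boundaries) shows the shape:

```shell
# Four documents separated by '---', matching the kinds in the log above.
cat > /tmp/kubeadm-sketch.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep '^kind:' /tmp/kubeadm-sketch.yaml
```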
	I0709 11:19:27.625088   11080 ssh_runner.go:195] Run: grep 172.18.206.134	control-plane.minikube.internal$ /etc/hosts
	I0709 11:19:27.629979   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.206.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:27.662105   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:27.863890   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:27.891871   11080 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000 for IP: 172.18.206.134
	I0709 11:19:27.891871   11080 certs.go:194] generating shared ca certs ...
	I0709 11:19:27.891974   11080 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 11:19:27.893231   11080 certs.go:256] generating profile certs ...
	I0709 11:19:27.894104   11080 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key
	I0709 11:19:27.894284   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt with IP's: []
	I0709 11:19:28.075685   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt ...
	I0709 11:19:28.075685   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt: {Name:mk25257931a758267f442465386bb9bdebfd15e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.077683   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key ...
	I0709 11:19:28.077683   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key: {Name:mk28ea0dfb093b7e1eceacf2d9e8a6ee777dbd4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.078679   11080 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab
	I0709 11:19:28.078679   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.206.134]
	I0709 11:19:28.282674   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab ...
	I0709 11:19:28.282674   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab: {Name:mk6d3927cc1582195a75050ba0c963c9f3cc6b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.284187   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab ...
	I0709 11:19:28.284187   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab: {Name:mk7c2c31b56e9fbc5ac0d0a2d8ec4a706b474e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.285485   11080 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt
	I0709 11:19:28.296251   11080 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key
	I0709 11:19:28.297243   11080 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key
	I0709 11:19:28.297243   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt with IP's: []
	I0709 11:19:28.588714   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt ...
	I0709 11:19:28.588714   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt: {Name:mk558fea8586bf42355b37f550a2aab396534e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590476   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key ...
	I0709 11:19:28.590476   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key: {Name:mk91292cc98d71191163856df723afdf525149d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
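The apiserver cert generated above (via minikube's crypto.go) carries the SAN IP set `[10.96.0.1 127.0.0.1 10.0.0.1 172.18.206.134]` — service CIDR gateway, loopback, and node IP. An openssl stand-in producing a throwaway cert with the same SANs, assuming OpenSSL 1.1.1+ for `-addext` (scratch paths, not the real profile directory):

```shell
# Throwaway self-signed cert with the SAN IPs from the log.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikube" \
  -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:172.18.206.134" \
  -keyout /tmp/apiserver-sketch.key -out /tmp/apiserver-sketch.crt 2>/dev/null
# Print the SAN extension to confirm all four IPs made it in.
openssl x509 -in /tmp/apiserver-sketch.crt -noout -ext subjectAltName
```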
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 11:19:28.591953   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 11:19:28.592200   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 11:19:28.592414   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 11:19:28.592581   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 11:19:28.592751   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 11:19:28.601940   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 11:19:28.602968   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 11:19:28.602968   11080 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 11:19:28.603997   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 11:19:28.604332   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 11:19:28.604696   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 11:19:28.605757   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 11:19:28.606105   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 11:19:28.606281   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:28.607895   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 11:19:28.657063   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 11:19:28.708475   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 11:19:28.753169   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 11:19:28.799111   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0709 11:19:28.843096   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 11:19:28.892474   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 11:19:28.936778   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 11:19:28.983720   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 11:19:29.032197   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 11:19:29.078840   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 11:19:29.121438   11080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 11:19:29.166376   11080 ssh_runner.go:195] Run: openssl version
	I0709 11:19:29.174606   11080 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0709 11:19:29.186263   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 11:19:29.214563   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221452   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221529   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.233587   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.241034   11080 command_runner.go:130] > 51391683
	I0709 11:19:29.253531   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 11:19:29.287599   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 11:19:29.319642   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.340563   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.351251   11080 command_runner.go:130] > 3ec20f2e
	I0709 11:19:29.363289   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 11:19:29.394996   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 11:19:29.430863   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439488   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439598   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.451335   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.461060   11080 command_runner.go:130] > b5213941
	I0709 11:19:29.472325   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 11:19:29.502349   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 11:19:29.508349   11080 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.508349   11080 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.509336   11080 kubeadm.go:391] StartCluster: {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:19:29.517326   11080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 11:19:29.552571   11080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0709 11:19:29.583129   11080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 11:19:29.614110   11080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0709 11:19:29.630668   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631001   11080 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631083   11080 kubeadm.go:156] found existing configuration files:
	
	I0709 11:19:29.643858   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 11:19:29.660913   11080 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.660913   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.672874   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 11:19:29.701166   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 11:19:29.719398   11080 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.719398   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.732866   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 11:19:29.764341   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.780362   11080 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.781070   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.793378   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.822887   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 11:19:29.839358   11080 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.839848   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.851450   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0709 11:19:29.868927   11080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 11:19:30.273184   11080 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:30.273184   11080 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:43.382099   11080 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [preflight] Running pre-flight checks
	I0709 11:19:43.382302   11080 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 11:19:43.382490   11080 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382562   11080 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.382843   11080 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.385956   11080 out.go:204]   - Generating certificates and keys ...
	I0709 11:19:43.386701   11080 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0709 11:19:43.386720   11080 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 11:19:43.386939   11080 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386963   11080 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.387517   11080 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387517   11080 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387702   11080 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387746   11080 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387967   11080 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.387967   11080 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.388299   11080 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388370   11080 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388585   11080 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388585   11080 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.392839   11080 out.go:204]   - Booting up control plane ...
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.395906   11080 kubeadm.go:309] [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.396929   11080 kubeadm.go:309] [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 command_runner.go:130] > [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 kubeadm.go:309] [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.396929   11080 command_runner.go:130] > [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.399982   11080 out.go:204]   - Configuring RBAC rules ...
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.401848   11080 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.401848   11080 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.405851   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:43.405851   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:43.408882   11080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0709 11:19:43.427890   11080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0709 11:19:43.436838   11080 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: 2024-07-09 18:17:47.269542400 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Modify: 2024-07-08 15:41:40.000000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Change: 2024-07-09 11:17:38.873000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:43.437660   11080 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0709 11:19:43.437724   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0709 11:19:43.486974   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0709 11:19:44.013734   11080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.028712   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.056718   11080 command_runner.go:130] > serviceaccount/kindnet created
	I0709 11:19:44.082804   11080 command_runner.go:130] > daemonset.apps/kindnet created
	I0709 11:19:44.086715   11080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-849000 minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=multinode-849000 minikube.k8s.io/primary=true
	I0709 11:19:44.115923   11080 command_runner.go:130] > -16
	I0709 11:19:44.121702   11080 ops.go:34] apiserver oom_adj: -16
	I0709 11:19:44.326882   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0709 11:19:44.332192   11080 command_runner.go:130] > node/multinode-849000 labeled
	I0709 11:19:44.342094   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.456107   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:44.849260   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.954493   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.356403   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.456462   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.855390   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.956473   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.355707   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.465842   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.857102   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.969191   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.359571   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.471625   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.845990   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.968255   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.348435   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.444253   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.849560   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.962518   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.355988   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.464938   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.857549   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.960971   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.358892   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.517544   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.859431   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.965459   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.346160   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.448688   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.850874   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.960813   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.349922   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.460568   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.858017   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.978603   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.347266   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.460858   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.852199   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.970042   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.358007   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.467115   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.847966   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.971538   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.352008   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.457997   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.855006   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.967023   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.356509   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.497561   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.848447   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.958599   11080 command_runner.go:130] > NAME      SECRETS   AGE
	I0709 11:19:56.958599   11080 command_runner.go:130] > default   0         0s
	I0709 11:19:56.958599   11080 kubeadm.go:1107] duration metric: took 12.8717652s to wait for elevateKubeSystemPrivileges
	W0709 11:19:56.958599   11080 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 11:19:56.958599   11080 kubeadm.go:393] duration metric: took 27.4491691s to StartCluster
	I0709 11:19:56.958599   11080 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.958599   11080 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:56.961504   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.963374   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 11:19:56.963460   11080 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:19:56.963460   11080 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0709 11:19:56.963779   11080 addons.go:69] Setting default-storageclass=true in profile "multinode-849000"
	I0709 11:19:56.963724   11080 addons.go:69] Setting storage-provisioner=true in profile "multinode-849000"
	I0709 11:19:56.963837   11080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-849000"
	I0709 11:19:56.963837   11080 addons.go:234] Setting addon storage-provisioner=true in "multinode-849000"
	I0709 11:19:56.963837   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:56.963837   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:19:56.964647   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.965248   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.970232   11080 out.go:177] * Verifying Kubernetes components...
	I0709 11:19:56.985249   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:57.211673   11080 command_runner.go:130] > apiVersion: v1
	I0709 11:19:57.211752   11080 command_runner.go:130] > data:
	I0709 11:19:57.211752   11080 command_runner.go:130] >   Corefile: |
	I0709 11:19:57.211752   11080 command_runner.go:130] >     .:53 {
	I0709 11:19:57.211752   11080 command_runner.go:130] >         errors
	I0709 11:19:57.211752   11080 command_runner.go:130] >         health {
	I0709 11:19:57.211752   11080 command_runner.go:130] >            lameduck 5s
	I0709 11:19:57.211752   11080 command_runner.go:130] >         }
	I0709 11:19:57.211752   11080 command_runner.go:130] >         ready
	I0709 11:19:57.211825   11080 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0709 11:19:57.211825   11080 command_runner.go:130] >            pods insecure
	I0709 11:19:57.211825   11080 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0709 11:19:57.211825   11080 command_runner.go:130] >            ttl 30
	I0709 11:19:57.211825   11080 command_runner.go:130] >         }
	I0709 11:19:57.211825   11080 command_runner.go:130] >         prometheus :9153
	I0709 11:19:57.211825   11080 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0709 11:19:57.211914   11080 command_runner.go:130] >            max_concurrent 1000
	I0709 11:19:57.211914   11080 command_runner.go:130] >         }
	I0709 11:19:57.211914   11080 command_runner.go:130] >         cache 30
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loop
	I0709 11:19:57.211914   11080 command_runner.go:130] >         reload
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loadbalance
	I0709 11:19:57.212061   11080 command_runner.go:130] >     }
	I0709 11:19:57.212061   11080 command_runner.go:130] > kind: ConfigMap
	I0709 11:19:57.212061   11080 command_runner.go:130] > metadata:
	I0709 11:19:57.212127   11080 command_runner.go:130] >   creationTimestamp: "2024-07-09T18:19:42Z"
	I0709 11:19:57.212127   11080 command_runner.go:130] >   name: coredns
	I0709 11:19:57.212127   11080 command_runner.go:130] >   namespace: kube-system
	I0709 11:19:57.212127   11080 command_runner.go:130] >   resourceVersion: "259"
	I0709 11:19:57.212301   11080 command_runner.go:130] >   uid: 7f6d77d9-aa71-4460-bf8f-36c58243a4c9
	I0709 11:19:57.212540   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 11:19:57.402732   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:57.866428   11080 command_runner.go:130] > configmap/coredns replaced
	I0709 11:19:57.866428   11080 start.go:946] {"host.minikube.internal": 172.18.192.1} host record injected into CoreDNS's ConfigMap
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.869413   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.870414   11080 cert_rotation.go:137] Starting client certificate rotation controller
	I0709 11:19:57.870414   11080 node_ready.go:35] waiting up to 6m0s for node "multinode-849000" to be "Ready" ...
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.885872   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.885872   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Audit-Id: 6bb3d639-9069-4a29-8363-06f8a9831c96
	I0709 11:19:57.886681   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.886681   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:57.887054   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Audit-Id: f8472087-a57e-416c-8eb7-93f828e86e4a
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.887125   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.887908   11080 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.888641   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.888641   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:19:57.888641   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.922291   11080 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0709 11:19:57.922618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Audit-Id: 71677033-c49e-4d37-8393-48341086209c
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.922733   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"391","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.384286   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:19:58.384390   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384390   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 0be5af66-01cb-451f-b03f-f7b17cb342f0
	I0709 11:19:58.384457   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 73b21b85-deb0-469b-929c-809b7004c7a7
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"401","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:58.384457   11080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-849000" context rescaled to 1 replicas
	I0709 11:19:58.870813   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.871025   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.871025   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.871025   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.873618   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:19:58.873618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Audit-Id: ad90069a-940e-4cdb-af81-263d232584a4
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.874322   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.874523   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.317106   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:59.317937   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:59.319000   11080 addons.go:234] Setting addon default-storageclass=true in "multinode-849000"
	I0709 11:19:59.319148   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:59.320086   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.326790   11080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:59.329802   11080 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:19:59.329802   11080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 11:19:59.329802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.380372   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.380372   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.380485   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.380485   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.383785   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:19:59.384697   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Audit-Id: 2d911086-1ff9-4073-8947-dda5637edc43
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.385157   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.876671   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.876962   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.876962   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.876962   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.882163   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:59.882430   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Audit-Id: ad80d923-4aa0-4499-baf3-ad4ec184183d
	I0709 11:19:59.882575   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.883719   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.884541   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:00.380571   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.380571   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.380571   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.380571   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.383966   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:00.384064   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Audit-Id: 4a57b8ec-36c2-4d90-9953-8040b268ad72
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.384193   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.384193   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.384227   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.384339   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:00.874487   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.874487   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.874577   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.874577   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.878085   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:00.878446   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Audit-Id: 7a79b48d-490c-45b9-8151-9d41d845548a
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.878824   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.384736   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.384736   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.384736   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.384736   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.389692   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:01.389768   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.389768   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.389768   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.389862   11080 round_trippers.go:580]     Audit-Id: 1717079c-a1a4-4056-ab5c-ebb223423669
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.389950   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.391360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.648493   11080 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:01.648493   11080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:20:01.693665   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.693737   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.693813   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:01.876763   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.876763   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.876763   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.876763   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.879377   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:01.879377   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Audit-Id: 0ed34bf6-0054-408f-9605-05f03b8f80e6
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.880494   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.384156   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.384242   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.384242   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.384242   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.387596   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:02.388425   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.388519   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.388569   11080 round_trippers.go:580]     Audit-Id: 259b4cd6-103a-46f6-84e4-4843fc15af0a
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.389015   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.389720   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:02.877416   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.877512   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.877583   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.877583   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.880264   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:02.880264   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Audit-Id: 5562798d-5a0c-40f4-971f-b148e1abc842
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.881513   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.385289   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.385402   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.385505   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.385568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.388996   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.389181   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Audit-Id: 4ecfd387-5cb9-439c-becc-8c20cdb41af7
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.389360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.879716   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.879972   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.879972   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.879972   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.883598   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.883598   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Audit-Id: ec1efeda-bf31-45f7-a76f-11d053440253
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.884488   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.951175   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:03.951212   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:03.951320   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:04.384770   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.384770   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.384770   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.384770   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.390877   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:04.390877   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Audit-Id: 2dfefc86-a830-4942-9bba-6769c2bc2c15
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.391263   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:04.391723   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:04.417029   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:04.417846   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:04.417999   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:04.559903   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:20:04.876248   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.876326   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.876326   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.876326   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.879898   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:04.879898   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Audit-Id: 1a6b0670-7193-473e-b8b3-1e5ed801e6c2
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.880302   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.131215   11080 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0709 11:20:05.131215   11080 command_runner.go:130] > pod/storage-provisioner created
	I0709 11:20:05.382732   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.382846   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.382846   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.382940   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.385465   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:05.385465   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Audit-Id: a9b472dd-22b2-460d-9517-6e634e4a101a
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.386469   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.875363   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.875363   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.875363   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.875363   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.879073   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:05.879530   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Audit-Id: 27ad306f-2225-40f7-8dc1-fa87ab3246f1
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.879530   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.879646   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.879646   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.880110   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.381697   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.381697   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.381697   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.381697   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.385207   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.385655   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Audit-Id: 696fd9a0-d92d-43a9-8bb1-bfc5d15a688d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.385720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:06.619934   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:06.761070   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:06.873491   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.873559   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.873559   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.873615   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.876478   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.876544   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Audit-Id: efcee314-8dd6-4c48-a1a6-4bf059942d04
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.876612   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.876721   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.877563   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:06.908144   11080 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0709 11:20:06.908847   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses
	I0709 11:20:06.908910   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.908910   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.908910   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.912483   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.912686   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Length: 1273
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Audit-Id: 739ee856-002a-4545-9544-df6be0efec2a
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.912921   11080 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0709 11:20:06.913516   11080 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.913596   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0709 11:20:06.913596   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:20:06.913704   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.916586   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.916586   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Audit-Id: a5ae0cbf-9df0-489a-8da4-2e8f3aa910ad
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Length: 1220
	I0709 11:20:06.917609   11080 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.921571   11080 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0709 11:20:06.923563   11080 addons.go:510] duration metric: took 9.9600694s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0709 11:20:07.375568   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.375568   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.375568   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.375568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.378569   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:07.379620   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Audit-Id: bd77f714-dc63-4d2c-bf78-52162a6b64d7
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.380117   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:07.875799   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.875861   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.875861   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.875861   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.880450   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:07.880704   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Audit-Id: 74d6bf60-f1ad-4db1-861f-6ea7ba47b092
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.881227   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:08.380911   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.381007   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.381007   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.381059   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.384650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.384650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Audit-Id: 46699637-e1f2-4ffe-9a5a-606601b7ce76
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.385170   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.385430   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.385689   11080 node_ready.go:49] node "multinode-849000" has status "Ready":"True"
	I0709 11:20:08.385689   11080 node_ready.go:38] duration metric: took 10.5152391s for node "multinode-849000" to be "Ready" ...
	I0709 11:20:08.385689   11080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:08.385689   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:08.385689   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.385689   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.385689   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.389650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.389650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Audit-Id: c7a373c1-e4d1-49a7-b63d-f1f5fe5cbdfe
	I0709 11:20:08.391677   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0709 11:20:08.396680   11080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:08.396680   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.396680   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.396680   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.397654   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.401662   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:08.401662   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Audit-Id: f0c73321-6fb5-4d40-a2ca-139f50a7329a
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.402451   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.403030   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.403030   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.403030   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.403030   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.409674   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:08.409674   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.409674   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Audit-Id: f9f6bf0c-50a8-416b-b487-7a0381a93ada
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.411023   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.904464   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.904538   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.904538   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.904538   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.924115   11080 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0709 11:20:08.924115   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.924115   11080 round_trippers.go:580]     Audit-Id: 5c7a83f8-f6fb-40c3-af41-44c2d80fb1eb
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.924509   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.925643   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.925643   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.925643   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.925643   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.942620   11080 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0709 11:20:08.943087   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Audit-Id: 1a00f334-2356-4158-b461-0e0c6821e0b6
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.945720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.412235   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.412389   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.412389   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.412389   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.417018   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.417018   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Audit-Id: 1bacafec-faf2-4175-9ce5-e5206b1140e1
	I0709 11:20:09.417950   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:09.418720   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.418777   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.418777   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.418777   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.421159   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.421159   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Audit-Id: 2bf8156c-3153-4e3e-b8c5-b1b8a2e4e26e
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.423016   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.901337   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.901337   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.901337   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.901337   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.953926   11080 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0709 11:20:09.953926   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Audit-Id: 1aada5b5-53a1-4882-b982-815daf34a5c5
	I0709 11:20:09.955836   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0709 11:20:09.956635   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.956732   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.956732   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.956732   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.959094   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.959094   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Audit-Id: ae59e9a3-f8ac-437b-9c75-8931309c73ad
	I0709 11:20:09.960120   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.960120   11080 pod_ready.go:92] pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.960661   11080 pod_ready.go:81] duration metric: took 1.5639759s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-849000
	I0709 11:20:09.960661   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.960828   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.960828   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.969075   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.969075   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Audit-Id: a17b78fa-415e-466e-8ae8-a1653319ab27
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.969743   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-849000","namespace":"kube-system","uid":"d9414b5f-b783-46b5-bd41-e07fbd338491","resourceVersion":"303","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.206.134:2379","kubernetes.io/config.hash":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.mirror":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.seen":"2024-07-09T18:19:42.812164051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0709 11:20:09.969743   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.970269   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.970321   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.970321   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.979269   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.979269   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Audit-Id: cfddc806-0d43-46bb-bd86-3712a4ab9215
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.979994   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.980431   11080 pod_ready.go:92] pod "etcd-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.980497   11080 pod_ready.go:81] duration metric: took 19.7697ms for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980497   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980690   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-849000
	I0709 11:20:09.980722   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.980753   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.980753   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.984639   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:09.984639   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Audit-Id: 4f8bf9fa-3246-46ce-b3d4-8ea91623128e
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.985248   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-849000","namespace":"kube-system","uid":"185dfcae-7f97-43a4-8ad7-9c2e18ad93f4","resourceVersion":"300","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.206.134:8443","kubernetes.io/config.hash":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.mirror":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0709 11:20:09.986253   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.986253   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.986320   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.986320   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.990658   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.990658   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Audit-Id: fc9d97ed-a036-474e-af5f-aba9fc7ea966
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.991081   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.991515   11080 pod_ready.go:92] pod "kube-apiserver-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.991547   11080 pod_ready.go:81] duration metric: took 11.0006ms for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991547   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991623   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-849000
	I0709 11:20:09.991803   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.991803   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.991803   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.002697   11080 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 11:20:10.002697   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.002697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.002697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Audit-Id: 5618d530-048d-4e22-a41f-dbc85f57723c
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.003187   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.003187   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.003445   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-849000","namespace":"kube-system","uid":"84786301-1bd7-4d77-900b-1130c36259bc","resourceVersion":"316","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.mirror":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165951Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0709 11:20:10.004195   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.004275   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.004275   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.004275   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.011235   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:10.011235   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Audit-Id: b83b8a86-c88b-4eda-adbc-8a4b41174f1d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.011896   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.012314   11080 pod_ready.go:92] pod "kube-controller-manager-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.012440   11080 pod_ready.go:81] duration metric: took 20.8924ms for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012440   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012550   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qv64t
	I0709 11:20:10.012621   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.012662   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.012662   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.016102   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.016102   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Audit-Id: 9328b861-5000-4723-bef4-66bdf082cdc5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.016102   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qv64t","generateName":"kube-proxy-","namespace":"kube-system","uid":"64fd2bca-c117-405b-98c4-db980781839b","resourceVersion":"407","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"20beb658-ecf0-4085-ad20-237b0700e9f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20beb658-ecf0-4085-ad20-237b0700e9f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0709 11:20:10.017415   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.017554   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.017554   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.017554   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.021755   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.021755   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Audit-Id: 7b57217c-1b40-42ea-bd05-ba32c6c09379
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.022911   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.023043   11080 pod_ready.go:92] pod "kube-proxy-qv64t" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.023043   11080 pod_ready.go:81] duration metric: took 10.6037ms for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.023043   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.182509   11080 request.go:629] Waited for 159.4656ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182778   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182865   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.182865   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.182897   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.186242   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.186242   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Audit-Id: 821c7888-15a2-4ad9-a6ba-adc53ab8a4f6
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.186554   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.186784   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-849000","namespace":"kube-system","uid":"03dff506-a8f6-41bd-baac-1ef9ad6892e3","resourceVersion":"323","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.mirror":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.seen":"2024-07-09T18:19:42.812159751Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0709 11:20:10.385659   11080 request.go:629] Waited for 198.2784ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.385659   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.385659   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.389558   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.389771   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Audit-Id: 9cc904cb-e823-4a93-85c2-226f98e81fdf
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.390169   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.390760   11080 pod_ready.go:92] pod "kube-scheduler-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.390865   11080 pod_ready.go:81] duration metric: took 367.8204ms for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.390865   11080 pod_ready.go:38] duration metric: took 2.0051694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:10.390944   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0709 11:20:10.403679   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 11:20:10.435279   11080 command_runner.go:130] > 2115
	I0709 11:20:10.436278   11080 api_server.go:72] duration metric: took 13.4725942s to wait for apiserver process to appear ...
	I0709 11:20:10.436474   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0709 11:20:10.436474   11080 api_server.go:253] Checking apiserver healthz at https://172.18.206.134:8443/healthz ...
	I0709 11:20:10.445806   11080 api_server.go:279] https://172.18.206.134:8443/healthz returned 200:
	ok
	I0709 11:20:10.445926   11080 round_trippers.go:463] GET https://172.18.206.134:8443/version
	I0709 11:20:10.445926   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.445926   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.445926   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.448281   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:10.448281   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Audit-Id: 7be21a54-db6a-4318-a5ec-f0cce4ef44ab
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.448527   11080 round_trippers.go:580]     Content-Length: 263
	I0709 11:20:10.448527   11080 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0709 11:20:10.448527   11080 api_server.go:141] control plane version: v1.30.2
	I0709 11:20:10.448527   11080 api_server.go:131] duration metric: took 12.0534ms to wait for apiserver health ...
	I0709 11:20:10.448527   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 11:20:10.589225   11080 request.go:629] Waited for 140.697ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.589493   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.589493   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.594092   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.594092   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Audit-Id: 2b8208e7-66c3-407d-a513-81f6367a1a50
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.594092   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.594453   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.594453   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.596104   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.598949   11080 system_pods.go:59] 8 kube-system pods found
	I0709 11:20:10.599087   11080 system_pods.go:61] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.599087   11080 system_pods.go:74] duration metric: took 150.5589ms to wait for pod list to return data ...
	I0709 11:20:10.599087   11080 default_sa.go:34] waiting for default service account to be created ...
	I0709 11:20:10.792113   11080 request.go:629] Waited for 192.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792224   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792412   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.792412   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.792412   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.796230   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.796230   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.796230   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Content-Length: 261
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Audit-Id: bc150d93-fb7c-4582-beac-a89c1e26ce41
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.796858   11080 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1dc179c9-669f-4ab7-8a39-5d6fc6670d2d","resourceVersion":"341","creationTimestamp":"2024-07-09T18:19:56Z"}}]}
	I0709 11:20:10.797248   11080 default_sa.go:45] found service account: "default"
	I0709 11:20:10.797329   11080 default_sa.go:55] duration metric: took 198.009ms for default service account to be created ...
	I0709 11:20:10.797329   11080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 11:20:10.981424   11080 request.go:629] Waited for 183.8495ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981505   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981752   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.981752   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.981752   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.987139   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:10.987139   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.987139   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Audit-Id: dc7e70c7-c26f-47bd-af7e-e59f9f0e6a12
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.987854   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.990198   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.994984   11080 system_pods.go:86] 8 kube-system pods found
	I0709 11:20:10.994984   11080 system_pods.go:89] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.995749   11080 system_pods.go:126] duration metric: took 198.4185ms to wait for k8s-apps to be running ...
	I0709 11:20:10.995749   11080 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 11:20:11.006411   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:20:11.032299   11080 system_svc.go:56] duration metric: took 36.2519ms WaitForService to wait for kubelet
	I0709 11:20:11.032384   11080 kubeadm.go:576] duration metric: took 14.0686983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:20:11.032449   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0709 11:20:11.185036   11080 request.go:629] Waited for 152.3704ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:11.185036   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:11.185036   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:11.188676   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:11.188676   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:11 GMT
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Audit-Id: de445958-d4f3-421b-bce6-7208e043ef68
	I0709 11:20:11.189854   11080 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0709 11:20:11.190610   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 11:20:11.190610   11080 node_conditions.go:123] node cpu capacity is 2
	I0709 11:20:11.190610   11080 node_conditions.go:105] duration metric: took 158.1605ms to run NodePressure ...
	I0709 11:20:11.190610   11080 start.go:240] waiting for startup goroutines ...
	I0709 11:20:11.190610   11080 start.go:245] waiting for cluster config update ...
	I0709 11:20:11.190610   11080 start.go:254] writing updated cluster config ...
	I0709 11:20:11.194395   11080 out.go:177] 
	I0709 11:20:11.197726   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.210868   11080 out.go:177] * Starting "multinode-849000-m02" worker node in "multinode-849000" cluster
	I0709 11:20:11.213536   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:20:11.214479   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:20:11.214815   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:20:11.215058   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:20:11.215282   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.219596   11080 start.go:360] acquireMachinesLock for multinode-849000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:20:11.219782   11080 start.go:364] duration metric: took 159µs to acquireMachinesLock for "multinode-849000-m02"
	I0709 11:20:11.219811   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0709 11:20:11.219811   11080 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0709 11:20:11.223353   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:20:11.223353   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:20:11.223353   11080 client.go:168] LocalClient.Create starting
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224657   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:20:13.151358   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:20:13.151782   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:13.151847   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:20:14.883405   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:20:14.883642   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:14.883703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:20.080459   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:20:20.573750   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: Creating VM...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:23.656383   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:23.657490   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:23.657490   11080 main.go:141] libmachine: Using switch "Default Switch"
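	The `Get-VMSwitch` query above filters for external switches or the well-known "Default Switch" GUID and emits JSON via `ConvertTo-Json`. A minimal Python sketch of one plausible selection pass over that output (the GUID and field names come from the log; `pick_switch` and the external-first preference are our assumptions, not minikube's exact code):

```python
import json

# Well-known GUID of the built-in Hyper-V "Default Switch" (from the log above).
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def pick_switch(stdout: str) -> str:
    """Choose a VM switch from `ConvertTo-Json @(Get-VMSwitch ...)` output.

    Sketch only: prefer an External switch (SwitchType == 2 in the Hyper-V
    enum) when one exists, otherwise fall back to the Default Switch GUID.
    """
    switches = json.loads(stdout)
    external = [s for s in switches if s["SwitchType"] == 2]
    if external:
        return external[0]["Name"]
    for s in switches:
        if s["Id"].lower() == DEFAULT_SWITCH_ID:
            return s["Name"]
    raise RuntimeError("no usable Hyper-V switch found")

# Sample mirroring the stdout captured in the log.
sample = '[{"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444", "Name": "Default Switch", "SwitchType": 1}]'
```

	On this run only the internal Default Switch existed, so the fallback branch is the one taken, matching `Using switch "Default Switch"` in the log.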
	I0709 11:20:23.657579   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:25.447625   11080 main.go:141] libmachine: Creating VHD
	I0709 11:20:25.447625   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5E53C6D0-5109-4D35-B1EC-1393270CA44B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:20:29.284763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:20:32.544147   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:32.544825   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:32.544942   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -SizeBytes 20000MB
	I0709 11:20:35.179825   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [stderr =====>] : 
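	The sequence above (a 10MB fixed VHD, then "Writing magic tar header" and "Writing SSH key tar header", then `Convert-VHD` to dynamic and `Resize-VHD` to 20000MB) is the docker-machine-style trick for seeding a boot2docker disk: a tar archive containing the SSH key is written at the start of the raw image, and the guest unpacks it on first boot. A hedged Python sketch of the seeding step, assuming a pre-sized raw image (the file layout and member name are illustrative, not minikube's exact format):

```python
import io
import tarfile

def seed_disk_image(image_path: str, key_bytes: bytes) -> None:
    """Write a tar archive containing an SSH key at byte 0 of a raw disk
    image, leaving the rest of the pre-sized image untouched.

    Sketch only: boot2docker-style images look for this 'magic' tar header
    when they first boot and extract its contents.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")  # illustrative member name
        info.size = len(key_bytes)
        tar.addfile(info, io.BytesIO(key_bytes))
    with open(image_path, "r+b") as img:
        img.seek(0)
        img.write(buf.getvalue())  # tar data plus its end-of-archive blocks
```

	Creating the VHD as `-Fixed` first guarantees the raw bytes land at a predictable offset; converting to dynamic and resizing afterwards keeps the on-host file small.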
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-849000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000-m02 -DynamicMemoryEnabled $false
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000-m02 -Count 2
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:43.474205   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\boot2docker.iso'
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:46.097188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd'
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: Starting VM...
	I0709 11:20:49.141353   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000-m02
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stderr =====>] : 
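	Every `[executing ==>]` line above shows the same invocation pattern: a single fully-formed command string handed to a non-interactive PowerShell. A minimal Python sketch of that wrapper (the helper names are ours; the flags and path are copied from the log):

```python
import subprocess

POWERSHELL = r"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe"

def build_ps_args(command: str) -> list:
    """Mirror the invocation pattern in the log: -NoProfile -NonInteractive
    followed by one complete command string."""
    return [POWERSHELL, "-NoProfile", "-NonInteractive", command]

def run_ps(command: str) -> str:
    """Run the command and return its stdout (only meaningful on a
    Windows host with Hyper-V; shown here as a sketch)."""
    result = subprocess.run(build_ps_args(command),
                            capture_output=True, text=True, check=True)
    return result.stdout

# Example: the Start-VM call captured in the log.
start_vm = build_ps_args(r"Hyper-V\Start-VM multinode-849000-m02")
```

	Qualifying cmdlets as `Hyper-V\Start-VM` (module-prefixed) avoids collisions with same-named commands from other modules on the host.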
	I0709 11:20:52.444588   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:20:52.444802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:54.848352   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:57.488165   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:57.488298   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:58.493459   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:00.761195   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:03.353161   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:03.353743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:04.368700   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:06.644937   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:10.193913   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:16.096106   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:18.442305   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stderr =====>] : 
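	The "Waiting for host to start..." stretch above alternates a VM state check with an `ipaddresses[0]` query, sleeping roughly a second between empty results until DHCP hands the guest an address. The retry shape can be sketched as (`wait_for_ip` and its parameters are illustrative; `get_ip` stands in for the PowerShell query):

```python
import time

def wait_for_ip(get_ip, attempts: int = 60, delay: float = 1.0) -> str:
    """Poll get_ip() until it returns a non-empty address.

    Sketch of the host-start wait loop in the log: each round runs the
    (( Get-VM ... ).networkadapters[0]).ipaddresses[0] query and retries
    on empty output.
    """
    for _ in range(attempts):
        ip = get_ip().strip()
        if ip:
            return ip
        time.sleep(delay)
    raise TimeoutError("VM did not acquire an IP address")
```

	In the log the query comes back empty four times before returning 172.18.205.211 about 30 seconds after `Start-VM`.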
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:23.279312   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:21:23.279415   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:25.559526   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:25.560574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:25.560679   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:28.232227   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:28.233232   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:28.238921   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:28.250822   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:28.250822   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:21:28.388458   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:21:28.388571   11080 buildroot.go:166] provisioning hostname "multinode-849000-m02"
	I0709 11:21:28.388571   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:30.618011   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:33.212355   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:33.212671   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:33.219551   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:33.220082   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:33.220082   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000-m02 && echo "multinode-849000-m02" | sudo tee /etc/hostname
	I0709 11:21:33.391210   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000-m02
	
	I0709 11:21:33.391343   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:35.578543   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:38.191886   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:38.192615   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:38.192615   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:21:38.341565   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
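	The shell snippet above ensures /etc/hosts maps exactly one 127.0.1.1 entry to the new hostname: skip if any line already ends with the hostname, else rewrite an existing 127.0.1.1 line, else append one. The same logic in Python (the function name is ours; the regexes track the grep/sed patterns in the log):

```python
import re

def set_hostname_entry(hosts_text: str, hostname: str) -> str:
    """Replicate the grep -xq / sed -i logic from the provisioning script."""
    lines = hosts_text.splitlines()
    # grep -xq '.*\s<hostname>': some line already maps the hostname.
    if any(re.fullmatch(r".*\s" + re.escape(hostname), line) for line in lines):
        return hosts_text
    # grep -xq '127.0.1.1\s.*': rewrite the existing loopback-alias line.
    if any(re.fullmatch(r"127\.0\.1\.1\s.*", line) for line in lines):
        lines = [re.sub(r"^127\.0\.1\.1\s.*", "127.0.1.1 " + hostname, line)
                 for line in lines]
    else:
        lines.append("127.0.1.1 " + hostname)
    return "\n".join(lines) + "\n"
```

	Because the outer check matches any line ending in the hostname, re-running the script (or this function) is a no-op, which is why the SSH command above produced empty output on this already-renamed host.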
	I0709 11:21:38.341639   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:21:38.341639   11080 buildroot.go:174] setting up certificates
	I0709 11:21:38.341639   11080 provision.go:84] configureAuth start
	I0709 11:21:38.341639   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:43.076717   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:45.280910   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:45.281082   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:45.281156   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:47.878898   11080 provision.go:143] copyHostCerts
	I0709 11:21:47.879605   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:21:47.880180   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:21:47.880180   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:21:47.880971   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:21:47.882540   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:21:47.883125   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:21:47.883125   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:21:47.883679   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:21:47.885058   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:21:47.885436   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:21:47.885557   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:21:47.886134   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:21:47.887498   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000-m02 san=[127.0.0.1 172.18.205.211 localhost minikube multinode-849000-m02]
	I0709 11:21:48.001674   11080 provision.go:177] copyRemoteCerts
	I0709 11:21:48.013068   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:21:48.014084   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:50.250018   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:50.250215   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:50.250314   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:52.836979   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:52.837914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:52.838808   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:21:52.940691   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9274594s)
	I0709 11:21:52.940691   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:21:52.941438   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:21:52.990054   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:21:52.990054   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:21:53.038708   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:21:53.039254   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0709 11:21:53.086100   11080 provision.go:87] duration metric: took 14.7444116s to configureAuth
	I0709 11:21:53.086158   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:21:53.086860   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:21:53.086990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:55.350257   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:55.351179   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:55.351218   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:57.996542   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:57.997434   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:57.997434   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:21:58.134576   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:21:58.134576   11080 buildroot.go:70] root file system type: tmpfs
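	The `df --output=fstype / | tail -n 1` probe above detects the root filesystem type (tmpfs here, since buildroot boots from the ISO into RAM), which decides how the docker unit is installed. Parsing that pipeline's output is just taking the last non-empty line, e.g. (`rootfs_type` is our name):

```python
def rootfs_type(df_output: str) -> str:
    """Parse `df --output=fstype /` output: a 'Type' header line followed
    by the value line, equivalent to the `tail -n 1` in the log."""
    return df_output.strip().splitlines()[-1].strip()
```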
	I0709 11:21:58.135124   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:21:58.135124   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:00.283090   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:00.284070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:00.284213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:02.866133   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:02.866377   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:02.871379   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:02.872132   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:02.872132   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.206.134"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:22:03.038743   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.206.134
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:22:03.038743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:05.225105   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:07.815935   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:07.816766   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:07.816766   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:22:10.033737   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:22:10.033805   11080 machine.go:97] duration metric: took 46.7543344s to provisionDockerMachine
	I0709 11:22:10.033805   11080 client.go:171] duration metric: took 1m58.8100611s to LocalClient.Create
	I0709 11:22:10.033904   11080 start.go:167] duration metric: took 1m58.81016s to libmachine.API.Create "multinode-849000"
	I0709 11:22:10.033904   11080 start.go:293] postStartSetup for "multinode-849000-m02" (driver="hyperv")
	I0709 11:22:10.033904   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:22:10.049483   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:22:10.049483   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:12.196759   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:14.773966   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:14.774211   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:14.774388   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:14.880469   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8308404s)
	I0709 11:22:14.893820   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:22:14.900205   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:22:14.900586   11080 command_runner.go:130] > ID=buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:22:14.900586   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:22:14.900878   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:22:14.900958   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:22:14.901694   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:22:14.902949   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:22:14.903007   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:22:14.914648   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:22:14.931988   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:22:14.976672   11080 start.go:296] duration metric: took 4.9427507s for postStartSetup
	I0709 11:22:14.980296   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:17.149588   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:19.731744   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:22:19.734373   11080 start.go:128] duration metric: took 2m8.5141378s to createHost
	I0709 11:22:19.734498   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:21.884569   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:21.885475   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:21.885570   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:24.462310   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:24.462866   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:24.462866   11080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0709 11:22:24.602515   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549344.609926885
	
	I0709 11:22:24.602629   11080 fix.go:216] guest clock: 1720549344.609926885
	I0709 11:22:24.602629   11080 fix.go:229] Guest: 2024-07-09 11:22:24.609926885 -0700 PDT Remote: 2024-07-09 11:22:19.7344985 -0700 PDT m=+344.108245701 (delta=4.875428385s)
	I0709 11:22:24.602743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:26.788501   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:29.322797   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:29.323325   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:29.323492   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549344
	I0709 11:22:29.467864   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:22:24 UTC 2024
	
	I0709 11:22:29.467922   11080 fix.go:236] clock set: Tue Jul  9 18:22:24 UTC 2024
	 (err=<nil>)
	I0709 11:22:29.467976   11080 start.go:83] releasing machines lock for "multinode-849000-m02", held for 2m18.2477075s
	I0709 11:22:29.468213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:31.622432   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:31.623654   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:31.623715   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:34.183731   11080 out.go:177] * Found network options:
	I0709 11:22:34.186860   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.188920   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.191174   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.194227   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 11:22:34.195301   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.198398   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:22:34.198526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:34.208413   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:22:34.209355   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474885   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:39.120904   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.121123   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.121331   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.150109   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.214930   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0709 11:22:39.216101   11080 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0076706s)
	W0709 11:22:39.216101   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:22:39.228355   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:22:39.361349   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:22:39.361418   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:22:39.361418   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1630028s)
	I0709 11:22:39.361567   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:22:39.361605   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:39.361773   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:39.395534   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:22:39.411076   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:22:39.440578   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:22:39.459507   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:22:39.472271   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:22:39.503478   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.535129   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:22:39.565594   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.596645   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:22:39.626303   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:22:39.657871   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:22:39.687857   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:22:39.718726   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:22:39.737354   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:22:39.750092   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:22:39.780554   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:39.961136   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 11:22:40.003477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:40.015211   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:22:40.037706   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:22:40.037931   11080 command_runner.go:130] > [Unit]
	I0709 11:22:40.037931   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:22:40.037931   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:22:40.037931   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:22:40.037931   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:22:40.037996   11080 command_runner.go:130] > [Service]
	I0709 11:22:40.037996   11080 command_runner.go:130] > Type=notify
	I0709 11:22:40.037996   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:22:40.037996   11080 command_runner.go:130] > Environment=NO_PROXY=172.18.206.134
	I0709 11:22:40.037996   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:22:40.037996   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:22:40.038089   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:22:40.038089   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:22:40.038089   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:22:40.038089   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:22:40.038089   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:22:40.038158   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:22:40.038158   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:22:40.038158   11080 command_runner.go:130] > ExecStart=
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:22:40.038260   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:22:40.038260   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:22:40.038260   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:22:40.038323   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:22:40.038430   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:22:40.038469   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:22:40.038532   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:22:40.038566   11080 command_runner.go:130] > Delegate=yes
	I0709 11:22:40.038566   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:22:40.038566   11080 command_runner.go:130] > KillMode=process
	I0709 11:22:40.038566   11080 command_runner.go:130] > [Install]
	I0709 11:22:40.038609   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:22:40.055979   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.091794   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:22:40.154011   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.190664   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.226820   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:22:40.287595   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.308575   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:40.342070   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:22:40.354449   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:22:40.359803   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:22:40.371212   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:22:40.388323   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:22:40.433437   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:22:40.633922   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:22:40.820826   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:22:40.820826   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:22:40.864181   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:41.057366   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:23:42.172852   11080 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0709 11:23:42.172852   11080 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0709 11:23:42.173160   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1155866s)
	I0709 11:23:42.185419   11080 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.209973   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.210951   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211574   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211639   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0709 11:23:42.221589   11080 out.go:177] 
	W0709 11:23:42.223827   11080 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0709 11:23:42.223827   11080 out.go:239] * 
	W0709 11:23:42.225718   11080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0709 11:23:42.228228   11080 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-849000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000: (12.0349919s)
helpers_test.go:244: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25: (8.3920176s)
helpers_test.go:252: TestMultiNode/serial/FreshStart2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                   Args                    |         Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	| pause   | -p json-output-180400                     | json-output-180400       | testUser          | v1.33.1 | 09 Jul 24 10:57 PDT | 09 Jul 24 10:57 PDT |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| unpause | -p json-output-180400                     | json-output-180400       | testUser          | v1.33.1 | 09 Jul 24 10:57 PDT | 09 Jul 24 10:57 PDT |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| stop    | -p json-output-180400                     | json-output-180400       | testUser          | v1.33.1 | 09 Jul 24 10:57 PDT | 09 Jul 24 10:58 PDT |
	|         | --output=json --user=testUser             |                          |                   |         |                     |                     |
	| delete  | -p json-output-180400                     | json-output-180400       | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:58 PDT | 09 Jul 24 10:58 PDT |
	| start   | -p json-output-error-590200               | json-output-error-590200 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:58 PDT |                     |
	|         | --memory=2200 --output=json               |                          |                   |         |                     |                     |
	|         | --wait=true --driver=fail                 |                          |                   |         |                     |                     |
	| delete  | -p json-output-error-590200               | json-output-error-590200 | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:58 PDT | 09 Jul 24 10:58 PDT |
	| start   | -p first-090000                           | first-090000             | minikube1\jenkins | v1.33.1 | 09 Jul 24 10:58 PDT | 09 Jul 24 11:01 PDT |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| start   | -p second-090000                          | second-090000            | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:01 PDT | 09 Jul 24 11:05 PDT |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| delete  | -p second-090000                          | second-090000            | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:05 PDT | 09 Jul 24 11:06 PDT |
	| delete  | -p first-090000                           | first-090000             | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:06 PDT | 09 Jul 24 11:07 PDT |
	| start   | -p mount-start-1-823500                   | mount-start-1-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:07 PDT | 09 Jul 24 11:09 PDT |
	|         | --memory=2048 --mount                     |                          |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize               |                          |                   |         |                     |                     |
	|         | 6543 --mount-port 46464                   |                          |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes             |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host | mount-start-1-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:09 PDT |                     |
	|         | --profile mount-start-1-823500 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46464 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-1-823500 ssh -- ls            | mount-start-1-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:09 PDT | 09 Jul 24 11:09 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| start   | -p mount-start-2-823500                   | mount-start-2-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:09 PDT | 09 Jul 24 11:12 PDT |
	|         | --memory=2048 --mount                     |                          |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize               |                          |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                   |                          |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes             |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host | mount-start-2-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:12 PDT |                     |
	|         | --profile mount-start-2-823500 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-823500 ssh -- ls            | mount-start-2-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:12 PDT | 09 Jul 24 11:12 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| delete  | -p mount-start-1-823500                   | mount-start-1-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:12 PDT | 09 Jul 24 11:13 PDT |
	|         | --alsologtostderr -v=5                    |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-823500 ssh -- ls            | mount-start-2-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:13 PDT | 09 Jul 24 11:13 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| stop    | -p mount-start-2-823500                   | mount-start-2-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:13 PDT | 09 Jul 24 11:13 PDT |
	| start   | -p mount-start-2-823500                   | mount-start-2-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:13 PDT | 09 Jul 24 11:15 PDT |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host | mount-start-2-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:15 PDT |                     |
	|         | --profile mount-start-2-823500 --v 0      |                          |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip        |                          |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid |                          |                   |         |                     |                     |
	|         |                                         0 |                          |                   |         |                     |                     |
	| ssh     | mount-start-2-823500 ssh -- ls            | mount-start-2-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:15 PDT | 09 Jul 24 11:16 PDT |
	|         | /minikube-host                            |                          |                   |         |                     |                     |
	| delete  | -p mount-start-2-823500                   | mount-start-2-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:16 PDT | 09 Jul 24 11:16 PDT |
	| delete  | -p mount-start-1-823500                   | mount-start-1-823500     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:16 PDT | 09 Jul 24 11:16 PDT |
	| start   | -p multinode-849000                       | multinode-849000         | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:16 PDT |                     |
	|         | --wait=true --memory=2200                 |                          |                   |         |                     |                     |
	|         | --nodes=2 -v=8                            |                          |                   |         |                     |                     |
	|         | --alsologtostderr                         |                          |                   |         |                     |                     |
	|         | --driver=hyperv                           |                          |                   |         |                     |                     |
	|---------|-------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 11:16:35
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 11:16:35.706571   11080 out.go:291] Setting OutFile to fd 1856 ...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.707294   11080 out.go:304] Setting ErrFile to fd 1916...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.730175   11080 out.go:298] Setting JSON to false
	I0709 11:16:35.734088   11080 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7264,"bootTime":1720541731,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 11:16:35.734088   11080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 11:16:35.740900   11080 out.go:177] * [multinode-849000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 11:16:35.746952   11080 notify.go:220] Checking for updates...
	I0709 11:16:35.749517   11080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:16:35.752016   11080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 11:16:35.754074   11080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 11:16:35.757149   11080 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 11:16:35.759785   11080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 11:16:35.763232   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:16:35.763232   11080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 11:16:41.108594   11080 out.go:177] * Using the hyperv driver based on user configuration
	I0709 11:16:41.113436   11080 start.go:297] selected driver: hyperv
	I0709 11:16:41.113436   11080 start.go:901] validating driver "hyperv" against <nil>
	I0709 11:16:41.113436   11080 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 11:16:41.161717   11080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 11:16:41.163562   11080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:16:41.163562   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:16:41.163562   11080 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0709 11:16:41.163562   11080 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0709 11:16:41.163562   11080 start.go:340] cluster config:
	{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:16:41.164325   11080 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 11:16:41.169436   11080 out.go:177] * Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	I0709 11:16:41.171790   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:16:41.171790   11080 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 11:16:41.171790   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:16:41.172900   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:16:41.173204   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:16:41.173497   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:16:41.173834   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json: {Name:mkcd76fd0991636c9ebb3945d5f6230c136234ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:360] acquireMachinesLock for multinode-849000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-849000"
	I0709 11:16:41.175145   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:16:41.175717   11080 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 11:16:41.178833   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:16:41.179697   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:16:41.179858   11080 client.go:168] LocalClient.Create starting
	I0709 11:16:41.180393   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181037   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:16:41.181305   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.181363   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181499   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:43.203345   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:16:44.905448   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:49.977487   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:49.978001   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:49.980413   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:16:50.481409   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: Creating VM...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:53.557877   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:16:53.557877   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:55.342337   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:55.343188   11080 main.go:141] libmachine: Creating VHD
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:16:59.073202   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 250EFD27-3D80-4D94-9BBB-C36AC3EE4AF2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:16:59.073277   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:16:59.081799   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:02.356056   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -SizeBytes 20000MB
	I0709 11:17:04.920871   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:04.921598   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:04.921696   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-849000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000 -DynamicMemoryEnabled $false
	I0709 11:17:10.906954   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000 -Count 2
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:13.117046   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\boot2docker.iso'
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:15.734748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd'
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:18.434648   11080 main.go:141] libmachine: Starting VM...
	I0709 11:17:18.434648   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000
	I0709 11:17:21.548427   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:23.856308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:23.857327   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:23.857477   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:26.424823   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:26.425555   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:27.429457   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:29.669589   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:33.238604   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:35.539152   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:39.150748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:41.412758   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:43.945561   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:43.946556   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:44.948904   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:47.223493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:49.888321   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:52.029346   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:17:52.029346   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:54.184452   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:56.739762   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:56.740551   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:56.747332   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:17:56.757962   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:17:56.757962   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:17:56.888454   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:17:56.888454   11080 buildroot.go:166] provisioning hostname "multinode-849000"
	I0709 11:17:56.888632   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:58.996092   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:01.596255   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:01.596966   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:01.596966   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000 && echo "multinode-849000" | sudo tee /etc/hostname
	I0709 11:18:01.744135   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000
	
	I0709 11:18:01.744309   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:03.902843   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:06.504362   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:06.505105   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:06.511047   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:06.511730   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:06.511730   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:18:06.661183   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:18:06.661276   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:18:06.661276   11080 buildroot.go:174] setting up certificates
	I0709 11:18:06.661276   11080 provision.go:84] configureAuth start
	I0709 11:18:06.661404   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:08.870371   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:08.871487   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:08.871619   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:11.480657   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:13.679886   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:13.680032   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:13.680386   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:16.351593   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:16.351812   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:16.351812   11080 provision.go:143] copyHostCerts
	I0709 11:18:16.351812   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:18:16.351812   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:18:16.352341   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:18:16.352562   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:18:16.353746   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:18:16.353870   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:18:16.353870   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:18:16.354397   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:18:16.355454   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:18:16.355782   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:18:16.355782   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:18:16.356143   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:18:16.357550   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000 san=[127.0.0.1 172.18.206.134 localhost minikube multinode-849000]
	I0709 11:18:16.528750   11080 provision.go:177] copyRemoteCerts
	I0709 11:18:16.542866   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:18:16.543526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:18.745596   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:18.746390   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:18.746524   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:21.394478   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:21.394661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:21.394962   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:21.507114   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9635719s)
	I0709 11:18:21.507261   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:18:21.507746   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:18:21.555636   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:18:21.556231   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0709 11:18:21.603561   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:18:21.604047   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:18:21.651880   11080 provision.go:87] duration metric: took 14.9904677s to configureAuth
	I0709 11:18:21.651880   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:18:21.652889   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:18:21.652889   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:23.890387   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:26.564345   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:26.565125   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:26.565125   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:18:26.688579   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:18:26.688579   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:18:26.688751   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:18:26.688751   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:28.871918   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:31.502951   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:31.503345   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:31.503345   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:18:31.658280   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:18:31.658412   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:33.800464   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:36.418307   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:36.418361   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:36.423718   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:36.423718   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:36.424298   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:18:38.623401   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:18:38.623401   11080 machine.go:97] duration metric: took 46.5939015s to provisionDockerMachine
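The `diff -u old new || { mv ...; systemctl ...; }` command above is an idempotent update pattern: the new unit file only replaces the installed one (followed by a daemon-reload and restart) when the two differ. A minimal sketch of the same pattern, with plain files standing in for `docker.service` and an `echo` stubbing out the `systemctl` calls (the `/tmp/unit-demo` path is illustrative):

```shell
set -eu
demo=/tmp/unit-demo                      # stand-in for /lib/systemd/system
mkdir -p "$demo"
printf 'old unit\n' > "$demo/docker.service"
printf 'new unit\n' > "$demo/docker.service.new"
# Only swap the file in (and "restart" the service) when the contents differ:
diff -u "$demo/docker.service" "$demo/docker.service.new" || {
  mv "$demo/docker.service.new" "$demo/docker.service"
  echo "daemon-reload && restart docker (stubbed)"
}
```

In the run above the diff failed because `/lib/systemd/system/docker.service` did not exist yet ("can't stat"), so the nonzero exit from `diff` covers first-time installs and genuine changes alike.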
	I0709 11:18:38.624385   11080 client.go:171] duration metric: took 1m57.4441387s to LocalClient.Create
	I0709 11:18:38.624385   11080 start.go:167] duration metric: took 1m57.4442999s to libmachine.API.Create "multinode-849000"
	I0709 11:18:38.624385   11080 start.go:293] postStartSetup for "multinode-849000" (driver="hyperv")
	I0709 11:18:38.624385   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:18:38.635377   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:18:38.635377   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:40.803077   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:40.803227   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:40.803332   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:43.382675   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:43.483674   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8482809s)
	I0709 11:18:43.496129   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:18:43.504466   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:18:43.504466   11080 command_runner.go:130] > ID=buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:18:43.504466   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:18:43.504466   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:18:43.504466   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:18:43.505074   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:18:43.506014   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:18:43.506014   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:18:43.518207   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:18:43.536167   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:18:43.580014   11080 start.go:296] duration metric: took 4.955526s for postStartSetup
	I0709 11:18:43.583840   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:45.720485   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:48.244917   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:18:48.247885   11080 start.go:128] duration metric: took 2m7.0717492s to createHost
	I0709 11:18:48.247974   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:50.357356   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:52.893710   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:52.893837   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:52.893837   11080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 11:18:53.018311   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549133.027082640
	
	I0709 11:18:53.018311   11080 fix.go:216] guest clock: 1720549133.027082640
	I0709 11:18:53.018311   11080 fix.go:229] Guest: 2024-07-09 11:18:53.02708264 -0700 PDT Remote: 2024-07-09 11:18:48.2478857 -0700 PDT m=+132.622337601 (delta=4.77919694s)
	I0709 11:18:53.018461   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:55.134647   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:57.706817   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:57.707574   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:57.707574   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549133
	I0709 11:18:57.837990   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:18:53 UTC 2024
	
	I0709 11:18:57.837990   11080 fix.go:236] clock set: Tue Jul  9 18:18:53 UTC 2024
	 (err=<nil>)
	I0709 11:18:57.837990   11080 start.go:83] releasing machines lock for "multinode-849000", held for 2m16.662394s
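The logged delta can be reproduced from the two timestamps: the guest reported `1720549133.027082640` via `date +%s.%N`, and the host-side wall clock at that moment, `2024-07-09 11:18:48.2478857 -0700`, is epoch `1720549128.2478857` (the log's own `date -s @1720549133` → `Jul 9 18:18:53 UTC` anchors the conversion). A quick check with awk:

```shell
guest=1720549133.027082640     # guest `date +%s.%N` output from the log
remote=1720549128.2478857      # 2024-07-09 18:18:48.2478857 UTC as epoch seconds
# prints delta=4.779197s, matching the logged delta=4.77919694s
awk -v g="$guest" -v r="$remote" 'BEGIN { printf "delta=%.6fs\n", g - r }'
```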
	I0709 11:18:57.837990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:59.937542   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:02.440702   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:19:02.440914   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:02.450148   11080 ssh_runner.go:195] Run: cat /version.json
	I0709 11:19:02.451159   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.652788   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:07.368844   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.369236   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.369437   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.395266   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.516234   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:19:07.516234   11080 command_runner.go:130] > {"iso_version": "v1.33.1-1720433170-19199", "kicbase_version": "v0.0.44-1720012048-19186", "minikube_version": "v1.33.1", "commit": "41ed6339bbe6a947e5e92015e7dd216db14d0b72"}
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: cat /version.json: (5.0661785s)
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0755151s)
	I0709 11:19:07.529057   11080 ssh_runner.go:195] Run: systemctl --version
	I0709 11:19:07.538439   11080 command_runner.go:130] > systemd 252 (252)
	I0709 11:19:07.538533   11080 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0709 11:19:07.550293   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:19:07.559188   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0709 11:19:07.559555   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:19:07.570397   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:19:07.596860   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:19:07.598042   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
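The `find ... -exec sh -c "sudo mv {} {}.mk_disabled"` above disables conflicting bridge/podman CNI configs by renaming rather than deleting them, so they remain recoverable. A sketch of the same rename against a throwaway directory (`/tmp/cni-demo` and the flannel filename are illustrative; the real path is `/etc/cni/net.d`):

```shell
d=/tmp/cni-demo
mkdir -p "$d"
touch "$d/87-podman-bridge.conflist" "$d/10-flannel.conflist"
# Rename bridge/podman configs that are not already disabled:
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```

Rerunning is a no-op thanks to the `-not -name '*.mk_disabled'` guard.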
	I0709 11:19:07.598090   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:07.598448   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:07.631211   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:19:07.642798   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:19:07.672487   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:19:07.691044   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:19:07.702345   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:19:07.737161   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.766120   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:19:07.798415   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.831110   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:19:07.865314   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:19:07.899412   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:19:07.929191   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:19:07.959649   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:19:07.977886   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:19:07.990402   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:19:08.021057   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:08.212039   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
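Each `sed -i -r` in the sequence above is an in-place edit of `/etc/containerd/config.toml`, and the capture group preserves the original indentation. A sketch of the `SystemdCgroup` rewrite against a sample file (`/tmp/demo-config.toml` and its contents are illustrative):

```shell
cfg=/tmp/demo-config.toml
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '    SystemdCgroup = true' > "$cfg"
# \1 re-emits the captured leading whitespace, so indentation survives:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"    # -> "    SystemdCgroup = false"
```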
	I0709 11:19:08.247477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:08.260899   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Unit]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:19:08.287773   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:19:08.287773   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:19:08.287773   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Service]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Type=notify
	I0709 11:19:08.287773   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:19:08.287773   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:19:08.287773   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:19:08.287773   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:19:08.287773   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:19:08.287773   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:19:08.287773   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:19:08.287773   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:19:08.288322   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:19:08.288322   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:19:08.288322   11080 command_runner.go:130] > ExecStart=
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:19:08.288380   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:19:08.288380   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:19:08.288532   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:19:08.288603   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:19:08.288603   11080 command_runner.go:130] > Delegate=yes
	I0709 11:19:08.288603   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:19:08.288644   11080 command_runner.go:130] > KillMode=process
	I0709 11:19:08.288644   11080 command_runner.go:130] > [Install]
	I0709 11:19:08.288644   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:19:08.299913   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.334941   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:19:08.378216   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.411780   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.445847   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:19:08.504747   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.527698   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:08.557879   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:19:08.569949   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:19:08.575730   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:19:08.587321   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:19:08.604542   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:19:08.652744   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:19:08.860138   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:19:09.036606   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:19:09.036846   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:19:09.086669   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:09.274594   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:11.819580   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5449771s)
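The 130-byte `/etc/docker/daemon.json` scp'd above is what pins Docker to the "cgroupfs" cgroup driver before the restart. The log records only the file's size, not its body; the JSON below is an assumed sketch of the typical minikube payload, with a grep standing in for the daemon actually reading it:

```shell
# Sketch of the daemon.json minikube writes to /etc/docker/daemon.json.
# The JSON body is an assumption -- the log only shows its 130-byte size.
dir=$(mktemp -d)
cat > "$dir/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
EOF
# Confirm the driver setting the restarted daemon would pick up.
driver=$(grep -o 'cgroupdriver=[a-z]*' "$dir/daemon.json")
echo "$driver"
```

The follow-up `systemctl daemon-reload` / `systemctl restart docker` pair in the log is what makes the new setting take effect.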
	I0709 11:19:11.830623   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 11:19:11.865432   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:11.899527   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 11:19:12.080125   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 11:19:12.263695   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.465673   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 11:19:12.506610   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:12.540854   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.740781   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 11:19:12.845180   11080 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 11:19:12.856179   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0709 11:19:12.864333   11080 command_runner.go:130] > Device: 0,22	Inode: 881         Links: 1
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864333   11080 command_runner.go:130] > Modify: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] > Change: 2024-07-09 18:19:12.777376059 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:12.865396   11080 start.go:562] Will wait 60s for crictl version
	I0709 11:19:12.878013   11080 ssh_runner.go:195] Run: which crictl
	I0709 11:19:12.883453   11080 command_runner.go:130] > /usr/bin/crictl
	I0709 11:19:12.896196   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 11:19:12.945750   11080 command_runner.go:130] > Version:  0.1.0
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeName:  docker
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeApiVersion:  v1
	I0709 11:19:12.946914   11080 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 11:19:12.955749   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:12.986144   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:12.997084   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:13.033222   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:13.039328   11080 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 11:19:13.039536   11080 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: 172.18.192.1/20
	I0709 11:19:13.058315   11080 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 11:19:13.064313   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
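The `/etc/hosts` edit above is a remove-then-append: any stale `host.minikube.internal` line is filtered out before the fresh one is written, so the entry stays unique across restarts. The same pattern can be reproduced against a temp file (IP and hostname taken from the log; the temp file stands in for `/etc/hosts`):

```shell
# Recreate minikube's idempotent hosts-entry update on a temp file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.18.192.1\thost.minikube.internal\n' > "$hosts"
# grep -v drops any existing tab-prefixed entry; echo appends the current one.
{ grep -v $'\thost.minikube.internal$' "$hosts"; echo $'172.18.192.1\thost.minikube.internal'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
count=$(grep -c 'host.minikube.internal' "$hosts")
echo "$count"   # exactly one entry survives, no matter how often this runs
```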
	I0709 11:19:13.085011   11080 kubeadm.go:877] updating cluster {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 11:19:13.085193   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:19:13.094647   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:13.119600   11080 docker.go:685] Got preloaded images: 
	I0709 11:19:13.119753   11080 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 11:19:13.132471   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:13.150071   11080 command_runner.go:139] > {"Repositories":{}}
	I0709 11:19:13.160388   11080 ssh_runner.go:195] Run: which lz4
	I0709 11:19:13.168652   11080 command_runner.go:130] > /usr/bin/lz4
	I0709 11:19:13.168652   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0709 11:19:13.180500   11080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
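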
	I0709 11:19:13.186301   11080 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
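The existence check above is just `stat -c` with the exit status steering the branch: success means the preload tarball is already on the guest, failure (status 1, as here) means it has to be copied over. A minimal reproduction of the failing branch, with a temp path standing in for `/preloaded.tar.lz4`:

```shell
# minikube's existence check: stat succeeds -> skip transfer, stat fails -> scp tarball.
# mktemp -u yields a path that does not exist, mirroring the log's failure case.
f=$(mktemp -u)
if stat -c "%s %y" "$f" >/dev/null 2>&1; then
  result="skip-transfer"
else
  result="copy-tarball"  # the branch the log takes before scp'ing 359632088 bytes
fi
echo "$result"
```

(`stat -c` is GNU coreutils syntax, matching the Buildroot guest in the log.)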
	I0709 11:19:13.187035   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0709 11:19:14.857940   11080 docker.go:649] duration metric: took 1.6892825s to copy over tarball
	I0709 11:19:14.870175   11080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 11:19:23.389025   11080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5188212s)
	I0709 11:19:23.389025   11080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0709 11:19:23.458573   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:23.485866   11080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0709 11:19:23.486188   11080 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 11:19:23.533118   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:23.744757   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:27.380382   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6356119s)
	I0709 11:19:27.389977   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0709 11:19:27.415657   11080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:27.415657   11080 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 11:19:27.415657   11080 cache_images.go:84] Images are preloaded, skipping loading
	I0709 11:19:27.415657   11080 kubeadm.go:928] updating node { 172.18.206.134 8443 v1.30.2 docker true true} ...
	I0709 11:19:27.415657   11080 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-849000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.206.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 11:19:27.423616   11080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 11:19:27.458657   11080 command_runner.go:130] > cgroupfs
	I0709 11:19:27.459385   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:27.459385   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:27.459452   11080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 11:19:27.459452   11080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.206.134 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-849000 NodeName:multinode-849000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.206.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.206.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 11:19:27.459589   11080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.206.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-849000"
	  kubeletExtraArgs:
	    node-ip: 172.18.206.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.206.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
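The generated config dumped above is a single file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`; that is what lands in `/var/tmp/minikube/kubeadm.yaml.new` a few lines below. A trimmed stand-in showing just that multi-document shape:

```shell
# Trimmed stand-in for the four-document kubeadm config above;
# only apiVersion/kind lines are kept to show the '---'-separated structure.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
docs=$(grep -c '^kind:' "$cfg")
echo "$docs"
```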
	I0709 11:19:27.472965   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubeadm
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubectl
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubelet
	I0709 11:19:27.499841   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 11:19:27.511476   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 11:19:27.527506   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0709 11:19:27.555887   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 11:19:27.582917   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0709 11:19:27.625088   11080 ssh_runner.go:195] Run: grep 172.18.206.134	control-plane.minikube.internal$ /etc/hosts
	I0709 11:19:27.629979   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.206.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:27.662105   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:27.863890   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:27.891871   11080 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000 for IP: 172.18.206.134
	I0709 11:19:27.891871   11080 certs.go:194] generating shared ca certs ...
	I0709 11:19:27.891974   11080 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 11:19:27.893231   11080 certs.go:256] generating profile certs ...
	I0709 11:19:27.894104   11080 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key
	I0709 11:19:27.894284   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt with IP's: []
	I0709 11:19:28.075685   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt ...
	I0709 11:19:28.075685   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt: {Name:mk25257931a758267f442465386bb9bdebfd15e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.077683   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key ...
	I0709 11:19:28.077683   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key: {Name:mk28ea0dfb093b7e1eceacf2d9e8a6ee777dbd4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.078679   11080 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab
	I0709 11:19:28.078679   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.206.134]
	I0709 11:19:28.282674   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab ...
	I0709 11:19:28.282674   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab: {Name:mk6d3927cc1582195a75050ba0c963c9f3cc6b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.284187   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab ...
	I0709 11:19:28.284187   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab: {Name:mk7c2c31b56e9fbc5ac0d0a2d8ec4a706b474e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.285485   11080 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt
	I0709 11:19:28.296251   11080 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key
	I0709 11:19:28.297243   11080 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key
	I0709 11:19:28.297243   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt with IP's: []
	I0709 11:19:28.588714   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt ...
	I0709 11:19:28.588714   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt: {Name:mk558fea8586bf42355b37f550a2aab396534e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590476   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key ...
	I0709 11:19:28.590476   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key: {Name:mk91292cc98d71191163856df723afdf525149d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 11:19:28.591953   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 11:19:28.592200   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 11:19:28.592414   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 11:19:28.592581   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 11:19:28.592751   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 11:19:28.601940   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 11:19:28.602968   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 11:19:28.602968   11080 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 11:19:28.603997   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 11:19:28.604332   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 11:19:28.604696   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 11:19:28.605757   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 11:19:28.606105   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 11:19:28.606281   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:28.607895   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 11:19:28.657063   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 11:19:28.708475   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 11:19:28.753169   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 11:19:28.799111   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0709 11:19:28.843096   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 11:19:28.892474   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 11:19:28.936778   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 11:19:28.983720   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 11:19:29.032197   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 11:19:29.078840   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 11:19:29.121438   11080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 11:19:29.166376   11080 ssh_runner.go:195] Run: openssl version
	I0709 11:19:29.174606   11080 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0709 11:19:29.186263   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 11:19:29.214563   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221452   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221529   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.233587   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.241034   11080 command_runner.go:130] > 51391683
	I0709 11:19:29.253531   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 11:19:29.287599   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 11:19:29.319642   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.340563   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.351251   11080 command_runner.go:130] > 3ec20f2e
	I0709 11:19:29.363289   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 11:19:29.394996   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 11:19:29.430863   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439488   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439598   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.451335   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.461060   11080 command_runner.go:130] > b5213941
	I0709 11:19:29.472325   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
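The `openssl x509 -hash` / `ln -fs` sequence above installs each CA into the OpenSSL trust store, which looks certificates up by `<subject-hash>.0` symlinks. A minimal self-contained sketch of that dance, using a throwaway self-signed CA in a temp directory instead of `/etc/ssl/certs` (requires the `openssl` CLI; paths and names are illustrative):

```shell
set -e
dir=$(mktemp -d)
# Generate a disposable CA so the example is self-contained.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=exampleCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# Same hash the log computes (8 hex chars of the subject name hash)...
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
# ...and the same symlink step minikube runs with ln -fs:
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```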
	I0709 11:19:29.502349   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 11:19:29.508349   11080 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.508349   11080 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.509336   11080 kubeadm.go:391] StartCluster: {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:19:29.517326   11080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 11:19:29.552571   11080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0709 11:19:29.583129   11080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 11:19:29.614110   11080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0709 11:19:29.630668   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631001   11080 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631083   11080 kubeadm.go:156] found existing configuration files:
	
	I0709 11:19:29.643858   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 11:19:29.660913   11080 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.660913   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.672874   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 11:19:29.701166   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 11:19:29.719398   11080 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.719398   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.732866   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 11:19:29.764341   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.780362   11080 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.781070   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.793378   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.822887   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 11:19:29.839358   11080 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.839848   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.851450   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0709 11:19:29.868927   11080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 11:19:30.273184   11080 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:30.273184   11080 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:43.382099   11080 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [preflight] Running pre-flight checks
	I0709 11:19:43.382302   11080 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 11:19:43.382490   11080 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382562   11080 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.382843   11080 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.385956   11080 out.go:204]   - Generating certificates and keys ...
	I0709 11:19:43.386701   11080 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0709 11:19:43.386720   11080 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 11:19:43.386939   11080 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386963   11080 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.387517   11080 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387517   11080 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387702   11080 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387746   11080 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387967   11080 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.387967   11080 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.388299   11080 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388370   11080 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388585   11080 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388585   11080 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.392839   11080 out.go:204]   - Booting up control plane ...
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.395906   11080 kubeadm.go:309] [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.396929   11080 kubeadm.go:309] [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 command_runner.go:130] > [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 kubeadm.go:309] [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.396929   11080 command_runner.go:130] > [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.399982   11080 out.go:204]   - Configuring RBAC rules ...
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.401848   11080 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.401848   11080 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.405851   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
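The `--discovery-token-ca-cert-hash sha256:...` value in the join command above is, per kubeadm's documentation, the SHA-256 digest of the cluster CA's DER-encoded public key. A hedged sketch of the derivation, using a throwaway CA (a real cluster would read `/etc/kubernetes/pki/ca.crt`):

```shell
set -e
dir=$(mktemp -d)
# Disposable stand-in for the cluster CA, to keep the example self-contained.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
# Extract the public key, DER-encode it, and hash it.
hash=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```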
	I0709 11:19:43.405851   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:43.405851   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:43.408882   11080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0709 11:19:43.427890   11080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0709 11:19:43.436838   11080 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: 2024-07-09 18:17:47.269542400 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Modify: 2024-07-08 15:41:40.000000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Change: 2024-07-09 11:17:38.873000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:43.437660   11080 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0709 11:19:43.437724   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0709 11:19:43.486974   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0709 11:19:44.013734   11080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.028712   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.056718   11080 command_runner.go:130] > serviceaccount/kindnet created
	I0709 11:19:44.082804   11080 command_runner.go:130] > daemonset.apps/kindnet created
	I0709 11:19:44.086715   11080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-849000 minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=multinode-849000 minikube.k8s.io/primary=true
	I0709 11:19:44.115923   11080 command_runner.go:130] > -16
	I0709 11:19:44.121702   11080 ops.go:34] apiserver oom_adj: -16
	I0709 11:19:44.326882   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0709 11:19:44.332192   11080 command_runner.go:130] > node/multinode-849000 labeled
	I0709 11:19:44.342094   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.456107   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:44.849260   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.954493   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.356403   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.456462   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.855390   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.956473   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.355707   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.465842   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.857102   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.969191   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.359571   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.471625   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.845990   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.968255   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.348435   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.444253   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.849560   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.962518   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.355988   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.464938   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.857549   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.960971   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.358892   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.517544   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.859431   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.965459   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.346160   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.448688   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.850874   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.960813   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.349922   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.460568   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.858017   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.978603   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.347266   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.460858   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.852199   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.970042   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.358007   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.467115   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.847966   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.971538   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.352008   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.457997   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.855006   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.967023   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.356509   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.497561   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.848447   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.958599   11080 command_runner.go:130] > NAME      SECRETS   AGE
	I0709 11:19:56.958599   11080 command_runner.go:130] > default   0         0s
	I0709 11:19:56.958599   11080 kubeadm.go:1107] duration metric: took 12.8717652s to wait for elevateKubeSystemPrivileges
	W0709 11:19:56.958599   11080 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 11:19:56.958599   11080 kubeadm.go:393] duration metric: took 27.4491691s to StartCluster
	I0709 11:19:56.958599   11080 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.958599   11080 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:56.961504   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.963374   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 11:19:56.963460   11080 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:19:56.963460   11080 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0709 11:19:56.963779   11080 addons.go:69] Setting default-storageclass=true in profile "multinode-849000"
	I0709 11:19:56.963724   11080 addons.go:69] Setting storage-provisioner=true in profile "multinode-849000"
	I0709 11:19:56.963837   11080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-849000"
	I0709 11:19:56.963837   11080 addons.go:234] Setting addon storage-provisioner=true in "multinode-849000"
	I0709 11:19:56.963837   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:56.963837   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:19:56.964647   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.965248   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.970232   11080 out.go:177] * Verifying Kubernetes components...
	I0709 11:19:56.985249   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:57.211673   11080 command_runner.go:130] > apiVersion: v1
	I0709 11:19:57.211752   11080 command_runner.go:130] > data:
	I0709 11:19:57.211752   11080 command_runner.go:130] >   Corefile: |
	I0709 11:19:57.211752   11080 command_runner.go:130] >     .:53 {
	I0709 11:19:57.211752   11080 command_runner.go:130] >         errors
	I0709 11:19:57.211752   11080 command_runner.go:130] >         health {
	I0709 11:19:57.211752   11080 command_runner.go:130] >            lameduck 5s
	I0709 11:19:57.211752   11080 command_runner.go:130] >         }
	I0709 11:19:57.211752   11080 command_runner.go:130] >         ready
	I0709 11:19:57.211825   11080 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0709 11:19:57.211825   11080 command_runner.go:130] >            pods insecure
	I0709 11:19:57.211825   11080 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0709 11:19:57.211825   11080 command_runner.go:130] >            ttl 30
	I0709 11:19:57.211825   11080 command_runner.go:130] >         }
	I0709 11:19:57.211825   11080 command_runner.go:130] >         prometheus :9153
	I0709 11:19:57.211825   11080 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0709 11:19:57.211914   11080 command_runner.go:130] >            max_concurrent 1000
	I0709 11:19:57.211914   11080 command_runner.go:130] >         }
	I0709 11:19:57.211914   11080 command_runner.go:130] >         cache 30
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loop
	I0709 11:19:57.211914   11080 command_runner.go:130] >         reload
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loadbalance
	I0709 11:19:57.212061   11080 command_runner.go:130] >     }
	I0709 11:19:57.212061   11080 command_runner.go:130] > kind: ConfigMap
	I0709 11:19:57.212061   11080 command_runner.go:130] > metadata:
	I0709 11:19:57.212127   11080 command_runner.go:130] >   creationTimestamp: "2024-07-09T18:19:42Z"
	I0709 11:19:57.212127   11080 command_runner.go:130] >   name: coredns
	I0709 11:19:57.212127   11080 command_runner.go:130] >   namespace: kube-system
	I0709 11:19:57.212127   11080 command_runner.go:130] >   resourceVersion: "259"
	I0709 11:19:57.212301   11080 command_runner.go:130] >   uid: 7f6d77d9-aa71-4460-bf8f-36c58243a4c9
	I0709 11:19:57.212540   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 11:19:57.402732   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:57.866428   11080 command_runner.go:130] > configmap/coredns replaced
	I0709 11:19:57.866428   11080 start.go:946] {"host.minikube.internal": 172.18.192.1} host record injected into CoreDNS's ConfigMap
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.869413   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.870414   11080 cert_rotation.go:137] Starting client certificate rotation controller
	I0709 11:19:57.870414   11080 node_ready.go:35] waiting up to 6m0s for node "multinode-849000" to be "Ready" ...
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.885872   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.885872   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Audit-Id: 6bb3d639-9069-4a29-8363-06f8a9831c96
	I0709 11:19:57.886681   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.886681   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:57.887054   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Audit-Id: f8472087-a57e-416c-8eb7-93f828e86e4a
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.887125   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.887908   11080 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.888641   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.888641   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:19:57.888641   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.922291   11080 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0709 11:19:57.922618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Audit-Id: 71677033-c49e-4d37-8393-48341086209c
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.922733   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"391","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.384286   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:19:58.384390   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384390   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 0be5af66-01cb-451f-b03f-f7b17cb342f0
	I0709 11:19:58.384457   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 73b21b85-deb0-469b-929c-809b7004c7a7
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"401","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:58.384457   11080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-849000" context rescaled to 1 replicas
	I0709 11:19:58.870813   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.871025   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.871025   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.871025   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.873618   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:19:58.873618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Audit-Id: ad90069a-940e-4cdb-af81-263d232584a4
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.874322   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.874523   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.317106   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:59.317937   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:59.319000   11080 addons.go:234] Setting addon default-storageclass=true in "multinode-849000"
	I0709 11:19:59.319148   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:59.320086   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.326790   11080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:59.329802   11080 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:19:59.329802   11080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 11:19:59.329802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.380372   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.380372   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.380485   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.380485   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.383785   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:19:59.384697   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Audit-Id: 2d911086-1ff9-4073-8947-dda5637edc43
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.385157   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.876671   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.876962   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.876962   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.876962   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.882163   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:59.882430   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Audit-Id: ad80d923-4aa0-4499-baf3-ad4ec184183d
	I0709 11:19:59.882575   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.883719   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.884541   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:00.380571   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.380571   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.380571   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.380571   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.383966   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:00.384064   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Audit-Id: 4a57b8ec-36c2-4d90-9953-8040b268ad72
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.384193   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.384193   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.384227   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.384339   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:00.874487   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.874487   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.874577   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.874577   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.878085   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:00.878446   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Audit-Id: 7a79b48d-490c-45b9-8151-9d41d845548a
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.878824   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.384736   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.384736   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.384736   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.384736   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.389692   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:01.389768   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.389768   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.389768   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.389862   11080 round_trippers.go:580]     Audit-Id: 1717079c-a1a4-4056-ab5c-ebb223423669
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.389950   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.391360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.648493   11080 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:01.648493   11080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:20:01.693665   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.693737   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.693813   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:01.876763   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.876763   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.876763   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.876763   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.879377   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:01.879377   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Audit-Id: 0ed34bf6-0054-408f-9605-05f03b8f80e6
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.880494   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.384156   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.384242   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.384242   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.384242   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.387596   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:02.388425   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.388519   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.388569   11080 round_trippers.go:580]     Audit-Id: 259b4cd6-103a-46f6-84e4-4843fc15af0a
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.389015   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.389720   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:02.877416   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.877512   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.877583   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.877583   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.880264   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:02.880264   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Audit-Id: 5562798d-5a0c-40f4-971f-b148e1abc842
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.881513   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.385289   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.385402   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.385505   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.385568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.388996   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.389181   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Audit-Id: 4ecfd387-5cb9-439c-becc-8c20cdb41af7
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.389360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.879716   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.879972   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.879972   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.879972   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.883598   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.883598   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Audit-Id: ec1efeda-bf31-45f7-a76f-11d053440253
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.884488   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.951175   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:03.951212   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:03.951320   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:04.384770   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.384770   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.384770   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.384770   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.390877   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:04.390877   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Audit-Id: 2dfefc86-a830-4942-9bba-6769c2bc2c15
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.391263   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:04.391723   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:04.417029   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:04.417846   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:04.417999   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:04.559903   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:20:04.876248   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.876326   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.876326   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.876326   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.879898   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:04.879898   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Audit-Id: 1a6b0670-7193-473e-b8b3-1e5ed801e6c2
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.880302   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.131215   11080 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0709 11:20:05.131215   11080 command_runner.go:130] > pod/storage-provisioner created
	I0709 11:20:05.382732   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.382846   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.382846   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.382940   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.385465   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:05.385465   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Audit-Id: a9b472dd-22b2-460d-9517-6e634e4a101a
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.386469   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.875363   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.875363   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.875363   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.875363   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.879073   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:05.879530   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Audit-Id: 27ad306f-2225-40f7-8dc1-fa87ab3246f1
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.879530   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.879646   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.879646   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.880110   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.381697   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.381697   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.381697   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.381697   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.385207   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.385655   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Audit-Id: 696fd9a0-d92d-43a9-8bb1-bfc5d15a688d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.385720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:06.619934   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:06.761070   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:06.873491   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.873559   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.873559   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.873615   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.876478   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.876544   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Audit-Id: efcee314-8dd6-4c48-a1a6-4bf059942d04
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.876612   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.876721   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.877563   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:06.908144   11080 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0709 11:20:06.908847   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses
	I0709 11:20:06.908910   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.908910   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.908910   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.912483   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.912686   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Length: 1273
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Audit-Id: 739ee856-002a-4545-9544-df6be0efec2a
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.912921   11080 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0709 11:20:06.913516   11080 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.913596   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0709 11:20:06.913596   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:20:06.913704   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.916586   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.916586   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Audit-Id: a5ae0cbf-9df0-489a-8da4-2e8f3aa910ad
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Length: 1220
	I0709 11:20:06.917609   11080 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.921571   11080 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0709 11:20:06.923563   11080 addons.go:510] duration metric: took 9.9600694s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0709 11:20:07.375568   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.375568   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.375568   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.375568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.378569   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:07.379620   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Audit-Id: bd77f714-dc63-4d2c-bf78-52162a6b64d7
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.380117   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:07.875799   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.875861   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.875861   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.875861   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.880450   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:07.880704   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Audit-Id: 74d6bf60-f1ad-4db1-861f-6ea7ba47b092
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.881227   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:08.380911   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.381007   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.381007   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.381059   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.384650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.384650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Audit-Id: 46699637-e1f2-4ffe-9a5a-606601b7ce76
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.385170   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.385430   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.385689   11080 node_ready.go:49] node "multinode-849000" has status "Ready":"True"
	I0709 11:20:08.385689   11080 node_ready.go:38] duration metric: took 10.5152391s for node "multinode-849000" to be "Ready" ...
	I0709 11:20:08.385689   11080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:08.385689   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:08.385689   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.385689   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.385689   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.389650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.389650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Audit-Id: c7a373c1-e4d1-49a7-b63d-f1f5fe5cbdfe
	I0709 11:20:08.391677   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0709 11:20:08.396680   11080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:08.396680   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.396680   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.396680   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.397654   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.401662   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:08.401662   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Audit-Id: f0c73321-6fb5-4d40-a2ca-139f50a7329a
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.402451   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.403030   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.403030   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.403030   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.403030   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.409674   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:08.409674   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.409674   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Audit-Id: f9f6bf0c-50a8-416b-b487-7a0381a93ada
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.411023   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.904464   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.904538   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.904538   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.904538   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.924115   11080 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0709 11:20:08.924115   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.924115   11080 round_trippers.go:580]     Audit-Id: 5c7a83f8-f6fb-40c3-af41-44c2d80fb1eb
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.924509   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.925643   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.925643   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.925643   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.925643   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.942620   11080 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0709 11:20:08.943087   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Audit-Id: 1a00f334-2356-4158-b461-0e0c6821e0b6
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.945720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.412235   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.412389   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.412389   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.412389   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.417018   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.417018   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Audit-Id: 1bacafec-faf2-4175-9ce5-e5206b1140e1
	I0709 11:20:09.417950   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:09.418720   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.418777   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.418777   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.418777   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.421159   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.421159   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Audit-Id: 2bf8156c-3153-4e3e-b8c5-b1b8a2e4e26e
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.423016   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.901337   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.901337   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.901337   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.901337   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.953926   11080 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0709 11:20:09.953926   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Audit-Id: 1aada5b5-53a1-4882-b982-815daf34a5c5
	I0709 11:20:09.955836   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0709 11:20:09.956635   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.956732   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.956732   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.956732   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.959094   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.959094   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Audit-Id: ae59e9a3-f8ac-437b-9c75-8931309c73ad
	I0709 11:20:09.960120   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.960120   11080 pod_ready.go:92] pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.960661   11080 pod_ready.go:81] duration metric: took 1.5639759s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-849000
	I0709 11:20:09.960661   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.960828   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.960828   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.969075   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.969075   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Audit-Id: a17b78fa-415e-466e-8ae8-a1653319ab27
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.969743   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-849000","namespace":"kube-system","uid":"d9414b5f-b783-46b5-bd41-e07fbd338491","resourceVersion":"303","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.206.134:2379","kubernetes.io/config.hash":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.mirror":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.seen":"2024-07-09T18:19:42.812164051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0709 11:20:09.969743   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.970269   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.970321   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.970321   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.979269   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.979269   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Audit-Id: cfddc806-0d43-46bb-bd86-3712a4ab9215
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.979994   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.980431   11080 pod_ready.go:92] pod "etcd-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.980497   11080 pod_ready.go:81] duration metric: took 19.7697ms for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980497   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980690   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-849000
	I0709 11:20:09.980722   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.980753   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.980753   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.984639   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:09.984639   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Audit-Id: 4f8bf9fa-3246-46ce-b3d4-8ea91623128e
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.985248   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-849000","namespace":"kube-system","uid":"185dfcae-7f97-43a4-8ad7-9c2e18ad93f4","resourceVersion":"300","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.206.134:8443","kubernetes.io/config.hash":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.mirror":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0709 11:20:09.986253   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.986253   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.986320   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.986320   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.990658   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.990658   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Audit-Id: fc9d97ed-a036-474e-af5f-aba9fc7ea966
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.991081   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.991515   11080 pod_ready.go:92] pod "kube-apiserver-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.991547   11080 pod_ready.go:81] duration metric: took 11.0006ms for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991547   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991623   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-849000
	I0709 11:20:09.991803   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.991803   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.991803   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.002697   11080 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 11:20:10.002697   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.002697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.002697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Audit-Id: 5618d530-048d-4e22-a41f-dbc85f57723c
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.003187   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.003187   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.003445   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-849000","namespace":"kube-system","uid":"84786301-1bd7-4d77-900b-1130c36259bc","resourceVersion":"316","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.mirror":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165951Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0709 11:20:10.004195   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.004275   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.004275   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.004275   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.011235   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:10.011235   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Audit-Id: b83b8a86-c88b-4eda-adbc-8a4b41174f1d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.011896   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.012314   11080 pod_ready.go:92] pod "kube-controller-manager-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.012440   11080 pod_ready.go:81] duration metric: took 20.8924ms for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012440   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012550   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qv64t
	I0709 11:20:10.012621   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.012662   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.012662   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.016102   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.016102   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Audit-Id: 9328b861-5000-4723-bef4-66bdf082cdc5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.016102   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qv64t","generateName":"kube-proxy-","namespace":"kube-system","uid":"64fd2bca-c117-405b-98c4-db980781839b","resourceVersion":"407","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"20beb658-ecf0-4085-ad20-237b0700e9f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20beb658-ecf0-4085-ad20-237b0700e9f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0709 11:20:10.017415   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.017554   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.017554   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.017554   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.021755   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.021755   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Audit-Id: 7b57217c-1b40-42ea-bd05-ba32c6c09379
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.022911   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.023043   11080 pod_ready.go:92] pod "kube-proxy-qv64t" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.023043   11080 pod_ready.go:81] duration metric: took 10.6037ms for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.023043   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.182509   11080 request.go:629] Waited for 159.4656ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182778   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182865   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.182865   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.182897   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.186242   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.186242   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Audit-Id: 821c7888-15a2-4ad9-a6ba-adc53ab8a4f6
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.186554   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.186784   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-849000","namespace":"kube-system","uid":"03dff506-a8f6-41bd-baac-1ef9ad6892e3","resourceVersion":"323","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.mirror":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.seen":"2024-07-09T18:19:42.812159751Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0709 11:20:10.385659   11080 request.go:629] Waited for 198.2784ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.385659   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.385659   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.389558   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.389771   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Audit-Id: 9cc904cb-e823-4a93-85c2-226f98e81fdf
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.390169   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.390760   11080 pod_ready.go:92] pod "kube-scheduler-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.390865   11080 pod_ready.go:81] duration metric: took 367.8204ms for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.390865   11080 pod_ready.go:38] duration metric: took 2.0051694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:10.390944   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0709 11:20:10.403679   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 11:20:10.435279   11080 command_runner.go:130] > 2115
	I0709 11:20:10.436278   11080 api_server.go:72] duration metric: took 13.4725942s to wait for apiserver process to appear ...
	I0709 11:20:10.436474   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0709 11:20:10.436474   11080 api_server.go:253] Checking apiserver healthz at https://172.18.206.134:8443/healthz ...
	I0709 11:20:10.445806   11080 api_server.go:279] https://172.18.206.134:8443/healthz returned 200:
	ok
	I0709 11:20:10.445926   11080 round_trippers.go:463] GET https://172.18.206.134:8443/version
	I0709 11:20:10.445926   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.445926   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.445926   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.448281   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:10.448281   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Audit-Id: 7be21a54-db6a-4318-a5ec-f0cce4ef44ab
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.448527   11080 round_trippers.go:580]     Content-Length: 263
	I0709 11:20:10.448527   11080 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0709 11:20:10.448527   11080 api_server.go:141] control plane version: v1.30.2
	I0709 11:20:10.448527   11080 api_server.go:131] duration metric: took 12.0534ms to wait for apiserver health ...
	I0709 11:20:10.448527   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 11:20:10.589225   11080 request.go:629] Waited for 140.697ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.589493   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.589493   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.594092   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.594092   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Audit-Id: 2b8208e7-66c3-407d-a513-81f6367a1a50
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.594092   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.594453   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.594453   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.596104   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.598949   11080 system_pods.go:59] 8 kube-system pods found
	I0709 11:20:10.599087   11080 system_pods.go:61] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.599087   11080 system_pods.go:74] duration metric: took 150.5589ms to wait for pod list to return data ...
	I0709 11:20:10.599087   11080 default_sa.go:34] waiting for default service account to be created ...
	I0709 11:20:10.792113   11080 request.go:629] Waited for 192.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792224   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792412   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.792412   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.792412   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.796230   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.796230   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.796230   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Content-Length: 261
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Audit-Id: bc150d93-fb7c-4582-beac-a89c1e26ce41
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.796858   11080 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1dc179c9-669f-4ab7-8a39-5d6fc6670d2d","resourceVersion":"341","creationTimestamp":"2024-07-09T18:19:56Z"}}]}
	I0709 11:20:10.797248   11080 default_sa.go:45] found service account: "default"
	I0709 11:20:10.797329   11080 default_sa.go:55] duration metric: took 198.009ms for default service account to be created ...
	I0709 11:20:10.797329   11080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 11:20:10.981424   11080 request.go:629] Waited for 183.8495ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981505   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981752   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.981752   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.981752   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.987139   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:10.987139   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.987139   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Audit-Id: dc7e70c7-c26f-47bd-af7e-e59f9f0e6a12
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.987854   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.990198   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.994984   11080 system_pods.go:86] 8 kube-system pods found
	I0709 11:20:10.994984   11080 system_pods.go:89] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.995749   11080 system_pods.go:126] duration metric: took 198.4185ms to wait for k8s-apps to be running ...
	I0709 11:20:10.995749   11080 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 11:20:11.006411   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:20:11.032299   11080 system_svc.go:56] duration metric: took 36.2519ms WaitForService to wait for kubelet
	I0709 11:20:11.032384   11080 kubeadm.go:576] duration metric: took 14.0686983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:20:11.032449   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0709 11:20:11.185036   11080 request.go:629] Waited for 152.3704ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:11.185036   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:11.185036   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:11.188676   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:11.188676   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:11 GMT
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Audit-Id: de445958-d4f3-421b-bce6-7208e043ef68
	I0709 11:20:11.189854   11080 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0709 11:20:11.190610   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 11:20:11.190610   11080 node_conditions.go:123] node cpu capacity is 2
	I0709 11:20:11.190610   11080 node_conditions.go:105] duration metric: took 158.1605ms to run NodePressure ...
	I0709 11:20:11.190610   11080 start.go:240] waiting for startup goroutines ...
	I0709 11:20:11.190610   11080 start.go:245] waiting for cluster config update ...
	I0709 11:20:11.190610   11080 start.go:254] writing updated cluster config ...
	I0709 11:20:11.194395   11080 out.go:177] 
	I0709 11:20:11.197726   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.210868   11080 out.go:177] * Starting "multinode-849000-m02" worker node in "multinode-849000" cluster
	I0709 11:20:11.213536   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:20:11.214479   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:20:11.214815   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:20:11.215058   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:20:11.215282   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.219596   11080 start.go:360] acquireMachinesLock for multinode-849000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:20:11.219782   11080 start.go:364] duration metric: took 159µs to acquireMachinesLock for "multinode-849000-m02"
	I0709 11:20:11.219811   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0709 11:20:11.219811   11080 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0709 11:20:11.223353   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:20:11.223353   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:20:11.223353   11080 client.go:168] LocalClient.Create starting
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224657   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:20:13.151358   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:20:13.151782   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:13.151847   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:20:14.883405   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:20:14.883642   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:14.883703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:20.080459   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:20:20.573750   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: Creating VM...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:23.656383   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:23.657490   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:23.657490   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:20:23.657579   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:25.447625   11080 main.go:141] libmachine: Creating VHD
	I0709 11:20:25.447625   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5E53C6D0-5109-4D35-B1EC-1393270CA44B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stderr =====>] : 
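	The `New-VHD` output above shows the fixed-VHD layout: `FileSize` (10486272) exceeds `Size` (10485760) by exactly the 512-byte VHD footer appended after the data region. A quick arithmetic check (no Hyper-V required):

```python
# Fixed VHD layout: the on-disk file is the virtual disk size plus a
# 512-byte VHD footer trailing the data region.
VHD_FOOTER_BYTES = 512

virtual_size = 10 * 1024 * 1024          # -SizeBytes 10MB -> Size = 10485760
file_size = virtual_size + VHD_FOOTER_BYTES

print(file_size)  # matches the FileSize reported above: 10486272
```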
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:20:29.284763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:20:32.544147   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:32.544825   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:32.544942   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -SizeBytes 20000MB
	I0709 11:20:35.179825   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version

	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-849000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000-m02 -DynamicMemoryEnabled $false
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000-m02 -Count 2
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:43.474205   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\boot2docker.iso'
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:46.097188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd'
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: Starting VM...
	I0709 11:20:49.141353   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000-m02
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:52.444588   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:20:52.444802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:54.848352   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:57.488165   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:57.488298   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:58.493459   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:00.761195   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:03.353161   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:03.353743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:04.368700   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:06.644937   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:10.193913   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:16.096106   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:18.442305   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stderr =====>] : 
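	The "Waiting for host to start" phase above is a simple poll loop: query the VM state, then the first adapter's first IP address, and retry with a short sleep until DHCP has assigned something. A minimal sketch in Python, where `query_ip` is a stand-in for the real PowerShell invocation:

```python
import time

def wait_for_ip(query_ip, attempts=60, delay=1.0):
    """Poll query_ip() until it returns a non-empty address.

    query_ip stands in for running
    ((Hyper-V\\Get-VM <name>).networkadapters[0]).ipaddresses[0]
    via PowerShell; it returns '' while no address has been assigned yet.
    """
    for _ in range(attempts):
        ip = query_ip().strip()
        if ip:
            return ip
        time.sleep(delay)
    raise TimeoutError("VM never reported an IP address")

# Simulated: empty for the first few polls, then an address appears.
responses = iter(["", "", "", "172.18.205.211"])
print(wait_for_ip(lambda: next(responses), delay=0))  # -> 172.18.205.211
```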
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:23.279312   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:21:23.279415   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:25.559526   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:25.560574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:25.560679   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:28.232227   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:28.233232   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:28.238921   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:28.250822   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:28.250822   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:21:28.388458   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:21:28.388571   11080 buildroot.go:166] provisioning hostname "multinode-849000-m02"
	I0709 11:21:28.388571   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:30.618011   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:33.212355   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:33.212671   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:33.219551   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:33.220082   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:33.220082   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000-m02 && echo "multinode-849000-m02" | sudo tee /etc/hostname
	I0709 11:21:33.391210   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000-m02
	
	I0709 11:21:33.391343   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:35.578543   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:38.191886   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:38.192615   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:38.192615   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:21:38.341565   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
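	The /etc/hosts script above is idempotent: it touches the file only when no line already names the host, rewriting an existing `127.0.1.1` alias in place or appending one otherwise. The same logic can be exercised against a scratch file (no sudo, no VM):

```shell
# Sketch of the /etc/hosts edit above, run against a temporary file.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"
name=multinode-849000-m02

if ! grep -q "[[:space:]]$name" "$hosts"; then
    if grep -q '^127.0.1.1[[:space:]]' "$hosts"; then
        # replace the existing 127.0.1.1 alias in place
        sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
    else
        echo "127.0.1.1 $name" >> "$hosts"
    fi
fi
grep '^127.0.1.1' "$hosts"   # -> 127.0.1.1 multinode-849000-m02
rm -f "$hosts"
```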
	I0709 11:21:38.341639   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:21:38.341639   11080 buildroot.go:174] setting up certificates
	I0709 11:21:38.341639   11080 provision.go:84] configureAuth start
	I0709 11:21:38.341639   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:43.076717   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:45.280910   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:45.281082   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:45.281156   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:47.878898   11080 provision.go:143] copyHostCerts
	I0709 11:21:47.879605   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:21:47.880180   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:21:47.880180   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:21:47.880971   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:21:47.882540   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:21:47.883125   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:21:47.883125   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:21:47.883679   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:21:47.885058   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:21:47.885436   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:21:47.885557   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:21:47.886134   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:21:47.887498   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000-m02 san=[127.0.0.1 172.18.205.211 localhost minikube multinode-849000-m02]
	I0709 11:21:48.001674   11080 provision.go:177] copyRemoteCerts
	I0709 11:21:48.013068   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:21:48.014084   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:50.250018   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:50.250215   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:50.250314   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:52.836979   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:52.837914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:52.838808   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:21:52.940691   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9274594s)
	I0709 11:21:52.940691   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:21:52.941438   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:21:52.990054   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:21:52.990054   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:21:53.038708   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:21:53.039254   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0709 11:21:53.086100   11080 provision.go:87] duration metric: took 14.7444116s to configureAuth
	I0709 11:21:53.086158   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:21:53.086860   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:21:53.086990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:55.350257   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:55.351179   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:55.351218   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:57.996542   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:57.997434   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:57.997434   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:21:58.134576   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:21:58.134576   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:21:58.135124   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:21:58.135124   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:00.283090   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:00.284070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:00.284213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:02.866133   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:02.866377   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:02.871379   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:02.872132   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:02.872132   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.206.134"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:22:03.038743   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.206.134
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:22:03.038743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:05.225105   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:07.815935   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:07.816766   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:07.816766   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:22:10.033737   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
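	The install step above is a replace-only-if-changed pattern: `diff -u` exits zero when the current and candidate units are identical, so the `|| { ... }` branch (move the new file into place, daemon-reload, enable, restart) runs only when the unit differs or, as here, does not yet exist. The same pattern on scratch files:

```shell
# Replace-if-changed: the mv branch fires only when the files differ
# (or the destination does not exist yet), mirroring the docker.service step.
dir=$(mktemp -d)
echo "ExecStart=/usr/bin/dockerd" > "$dir/docker.service.new"

# First run: no existing unit, diff fails, the new file is moved into place.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null \
  || mv "$dir/docker.service.new" "$dir/docker.service"
[ -f "$dir/docker.service" ] && echo installed    # -> installed

# Second run with an identical candidate: diff succeeds, nothing moves.
cp "$dir/docker.service" "$dir/docker.service.new"
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null \
  && echo unchanged                               # -> unchanged
rm -rf "$dir"
```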
	
	I0709 11:22:10.033805   11080 machine.go:97] duration metric: took 46.7543344s to provisionDockerMachine
	I0709 11:22:10.033805   11080 client.go:171] duration metric: took 1m58.8100611s to LocalClient.Create
	I0709 11:22:10.033904   11080 start.go:167] duration metric: took 1m58.81016s to libmachine.API.Create "multinode-849000"
	I0709 11:22:10.033904   11080 start.go:293] postStartSetup for "multinode-849000-m02" (driver="hyperv")
	I0709 11:22:10.033904   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:22:10.049483   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:22:10.049483   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:12.196759   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:14.773966   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:14.774211   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:14.774388   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:14.880469   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8308404s)
	I0709 11:22:14.893820   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:22:14.900205   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:22:14.900586   11080 command_runner.go:130] > ID=buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:22:14.900586   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:22:14.900878   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:22:14.900958   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:22:14.901694   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:22:14.902949   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:22:14.903007   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:22:14.914648   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:22:14.931988   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:22:14.976672   11080 start.go:296] duration metric: took 4.9427507s for postStartSetup
	I0709 11:22:14.980296   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:17.149588   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:19.731744   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:22:19.734373   11080 start.go:128] duration metric: took 2m8.5141378s to createHost
	I0709 11:22:19.734498   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:21.884569   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:21.885475   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:21.885570   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:24.462310   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:24.462866   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:24.462866   11080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 11:22:24.602515   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549344.609926885
	
	I0709 11:22:24.602629   11080 fix.go:216] guest clock: 1720549344.609926885
	I0709 11:22:24.602629   11080 fix.go:229] Guest: 2024-07-09 11:22:24.609926885 -0700 PDT Remote: 2024-07-09 11:22:19.7344985 -0700 PDT m=+344.108245701 (delta=4.875428385s)
	I0709 11:22:24.602743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:26.788501   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:29.322797   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:29.323325   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:29.323492   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549344
	I0709 11:22:29.467864   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:22:24 UTC 2024
	
	I0709 11:22:29.467922   11080 fix.go:236] clock set: Tue Jul  9 18:22:24 UTC 2024
	 (err=<nil>)
	I0709 11:22:29.467976   11080 start.go:83] releasing machines lock for "multinode-849000-m02", held for 2m18.2477075s
	I0709 11:22:29.468213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:31.622432   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:31.623654   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:31.623715   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:34.183731   11080 out.go:177] * Found network options:
	I0709 11:22:34.186860   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.188920   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.191174   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.194227   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 11:22:34.195301   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.198398   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:22:34.198526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:34.208413   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:22:34.209355   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474885   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:39.120904   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.121123   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.121331   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.150109   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.214930   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0709 11:22:39.216101   11080 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0076706s)
	W0709 11:22:39.216101   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:22:39.228355   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:22:39.361349   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:22:39.361418   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:22:39.361418   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1630028s)
	I0709 11:22:39.361567   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:22:39.361605   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:39.361773   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:39.395534   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:22:39.411076   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:22:39.440578   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:22:39.459507   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:22:39.472271   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:22:39.503478   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.535129   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:22:39.565594   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.596645   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:22:39.626303   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:22:39.657871   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:22:39.687857   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:22:39.718726   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:22:39.737354   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:22:39.750092   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:22:39.780554   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:39.961136   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 11:22:40.003477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:40.015211   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:22:40.037706   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:22:40.037931   11080 command_runner.go:130] > [Unit]
	I0709 11:22:40.037931   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:22:40.037931   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:22:40.037931   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:22:40.037931   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:22:40.037996   11080 command_runner.go:130] > [Service]
	I0709 11:22:40.037996   11080 command_runner.go:130] > Type=notify
	I0709 11:22:40.037996   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:22:40.037996   11080 command_runner.go:130] > Environment=NO_PROXY=172.18.206.134
	I0709 11:22:40.037996   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:22:40.037996   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:22:40.038089   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:22:40.038089   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:22:40.038089   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:22:40.038089   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:22:40.038089   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:22:40.038158   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:22:40.038158   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:22:40.038158   11080 command_runner.go:130] > ExecStart=
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:22:40.038260   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:22:40.038260   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:22:40.038260   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:22:40.038323   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:22:40.038430   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:22:40.038469   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:22:40.038532   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:22:40.038566   11080 command_runner.go:130] > Delegate=yes
	I0709 11:22:40.038566   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:22:40.038566   11080 command_runner.go:130] > KillMode=process
	I0709 11:22:40.038566   11080 command_runner.go:130] > [Install]
	I0709 11:22:40.038609   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:22:40.055979   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.091794   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:22:40.154011   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.190664   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.226820   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:22:40.287595   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.308575   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:40.342070   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:22:40.354449   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:22:40.359803   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:22:40.371212   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:22:40.388323   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:22:40.433437   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:22:40.633922   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:22:40.820826   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:22:40.820826   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:22:40.864181   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:41.057366   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:23:42.172852   11080 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0709 11:23:42.172852   11080 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0709 11:23:42.173160   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1155866s)
	I0709 11:23:42.185419   11080 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.209973   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.210951   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211574   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211639   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0709 11:23:42.221589   11080 out.go:177] 
	W0709 11:23:42.223827   11080 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0709 11:23:42.223827   11080 out.go:239] * 
	W0709 11:23:42.225718   11080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0709 11:23:42.228228   11080 out.go:177] 
	
	
	==> Docker <==
	Jul 09 18:19:58 multinode-849000 dockerd[1440]: time="2024-07-09T18:19:58.762037295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:19:59 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:19:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/668c809456776476167adf8bc4c147738bd39bbc42056eccedf50a86401020d7/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:05 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:05Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20240513-cd2ac642: Status: Downloaded newer image for kindest/kindnetd:v20240513-cd2ac642"
	Jul 09 18:20:05 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:05.391830911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:05 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:05.392642009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:05 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:05.392839309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:05 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:05.393030609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.596781392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.596855692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.596873992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597835991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597891091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597905791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597983991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597776491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d8c6b21616c767448c4be98bae932ed2b404a3dadcf2b2b4b157e8bcf347ea/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a33ce3348449c0faec48fb58b4574718ba6b78d837824e60579921c71f06d76/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968184436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968452735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968474235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968801835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.141801596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.142933705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.143853812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.144140014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c150592e658c3       cbb01a7bd410d                                                                              3 minutes ago       Running             coredns                   0                   2a33ce3348449       coredns-7db6d8ff4d-lzsvc
	37c7b8e14dc9c       6e38f40d628db                                                                              3 minutes ago       Running             storage-provisioner       0                   06d8c6b21616c       storage-provisioner
	f3de6fb5f7f77       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8   3 minutes ago       Running             kindnet-cni               0                   668c809456776       kindnet-8ww8c
	02ab9d1727686       53c535741fb44                                                                              4 minutes ago       Running             kube-proxy                0                   0a60f24294838       kube-proxy-qv64t
	0272c56037c7d       3861cfcd7c04c                                                                              4 minutes ago       Running             etcd                      0                   2c574be2cc6d3       etcd-multinode-849000
	8661e349d48ab       7820c83aa1394                                                                              4 minutes ago       Running             kube-scheduler            0                   b9412aa955ab7       kube-scheduler-multinode-849000
	a89ee753e7759       e874818b3caac                                                                              4 minutes ago       Running             kube-controller-manager   0                   a610e3d24fa06       kube-controller-manager-multinode-849000
	556077ae2825d       56ce0fd9fb532                                                                              4 minutes ago       Running             kube-apiserver            0                   2035bb8593f0e       kube-apiserver-multinode-849000
	
	
	==> coredns [c150592e658c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = eabdad51eef6fc649fa850c178ba451366b41048c1c621a6be25e706245d9103e597e4866d961c875c300d6a366ff9db50ab3afe55608b789039c53007846ed6
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54651 - 41351 "HINFO IN 6752767091270397564.1917026836058955763. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104932825s
	
	
	==> describe nodes <==
	Name:               multinode-849000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-849000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=multinode-849000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 18:19:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-849000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 18:23:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 18:20:13 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 18:20:13 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 18:20:13 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 18:20:13 +0000   Tue, 09 Jul 2024 18:20:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.206.134
	  Hostname:    multinode-849000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 af90c209c8a84d288c2d79663fa33a94
	  System UUID:                69e31ac5-0527-9e4a-81b6-556c6bac7963
	  Boot ID:                    5c1387e9-724e-4a1c-a3cc-dde77e8449e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lzsvc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m6s
	  kube-system                 etcd-multinode-849000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kindnet-8ww8c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-multinode-849000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-multinode-849000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-qv64t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-multinode-849000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  Starting                 4m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m28s (x8 over 4m28s)  kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s (x8 over 4m28s)  kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s (x7 over 4m28s)  kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m7s                   node-controller  Node multinode-849000 event: Registered Node multinode-849000 in Controller
	  Normal  NodeReady                3m56s                  kubelet          Node multinode-849000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.257592] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.061894] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul 9 18:18] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.172355] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[Jul 9 18:19] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.106297] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.542997] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.194600] systemd-fstab-generator[1056]: Ignoring "noauto" option for root device
	[  +0.225984] systemd-fstab-generator[1070]: Ignoring "noauto" option for root device
	[  +2.819794] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.174764] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.183052] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.284648] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[ +10.989764] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.110491] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.025456] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.572905] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.100801] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.070675] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.120083] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.551679] systemd-fstab-generator[2475]: Ignoring "noauto" option for root device
	[  +0.193907] kauditd_printk_skb: 12 callbacks suppressed
	[Jul 9 18:20] kauditd_printk_skb: 51 callbacks suppressed
	
	
	==> etcd [0272c56037c7] <==
	{"level":"info","ts":"2024-07-09T18:19:37.200524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 switched to configuration voters=(1027646240339566960)"}
	{"level":"info","ts":"2024-07-09T18:19:37.205515Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"88434b99d7bbd165","local-member-id":"e42eecf9634a170","added-peer-id":"e42eecf9634a170","added-peer-peer-urls":["https://172.18.206.134:2380"]}
	{"level":"info","ts":"2024-07-09T18:19:37.206249Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-09T18:19:37.214739Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e42eecf9634a170","initial-advertise-peer-urls":["https://172.18.206.134:2380"],"listen-peer-urls":["https://172.18.206.134:2380"],"advertise-client-urls":["https://172.18.206.134:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.18.206.134:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-09T18:19:37.217344Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-09T18:19:37.206363Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.18.206.134:2380"}
	{"level":"info","ts":"2024-07-09T18:19:37.220495Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.18.206.134:2380"}
	{"level":"info","ts":"2024-07-09T18:19:37.796316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-09T18:19:37.796584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-09T18:19:37.796851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 received MsgPreVoteResp from e42eecf9634a170 at term 1"}
	{"level":"info","ts":"2024-07-09T18:19:37.797062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 became candidate at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.79733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 received MsgVoteResp from e42eecf9634a170 at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.797375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 became leader at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.797444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e42eecf9634a170 elected leader e42eecf9634a170 at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.80456Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e42eecf9634a170","local-member-attributes":"{Name:multinode-849000 ClientURLs:[https://172.18.206.134:2379]}","request-path":"/0/members/e42eecf9634a170/attributes","cluster-id":"88434b99d7bbd165","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-09T18:19:37.804755Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-09T18:19:37.804945Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-09T18:19:37.805302Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.812564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-09T18:19:37.819296Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-09T18:19:37.819456Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-09T18:19:37.820534Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.18.206.134:2379"}
	{"level":"info","ts":"2024-07-09T18:19:37.82294Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"88434b99d7bbd165","local-member-id":"e42eecf9634a170","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.8454Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.845615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:24:03 up 6 min,  0 users,  load average: 0.14, 0.31, 0.17
	Linux multinode-849000 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f3de6fb5f7f7] <==
	I0709 18:21:56.526768       1 main.go:227] handling current node
	I0709 18:22:06.533754       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:22:06.533852       1 main.go:227] handling current node
	I0709 18:22:16.543913       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:22:16.544038       1 main.go:227] handling current node
	I0709 18:22:26.558052       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:22:26.558150       1 main.go:227] handling current node
	I0709 18:22:36.564402       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:22:36.564453       1 main.go:227] handling current node
	I0709 18:22:46.574530       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:22:46.574630       1 main.go:227] handling current node
	I0709 18:22:56.578891       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:22:56.579002       1 main.go:227] handling current node
	I0709 18:23:06.592634       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:23:06.592760       1 main.go:227] handling current node
	I0709 18:23:16.607522       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:23:16.607715       1 main.go:227] handling current node
	I0709 18:23:26.621086       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:23:26.621130       1 main.go:227] handling current node
	I0709 18:23:36.627243       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:23:36.627450       1 main.go:227] handling current node
	I0709 18:23:46.632753       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:23:46.632795       1 main.go:227] handling current node
	I0709 18:23:56.642061       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:23:56.642167       1 main.go:227] handling current node
	
	
	==> kube-apiserver [556077ae2825] <==
	I0709 18:19:39.633120       1 aggregator.go:165] initial CRD sync complete...
	I0709 18:19:39.633153       1 autoregister_controller.go:141] Starting autoregister controller
	I0709 18:19:39.633161       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0709 18:19:39.633166       1 cache.go:39] Caches are synced for autoregister controller
	I0709 18:19:39.636794       1 controller.go:615] quota admission added evaluator for: namespaces
	I0709 18:19:39.638553       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0709 18:19:39.698240       1 shared_informer.go:320] Caches are synced for configmaps
	I0709 18:19:39.700011       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0709 18:19:39.702635       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0709 18:19:39.714433       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0709 18:19:40.505081       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0709 18:19:40.517142       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0709 18:19:40.517305       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0709 18:19:41.636583       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0709 18:19:41.706223       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0709 18:19:41.808149       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0709 18:19:41.821195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.206.134]
	I0709 18:19:41.822637       1 controller.go:615] quota admission added evaluator for: endpoints
	I0709 18:19:41.843642       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0709 18:19:42.609385       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0709 18:19:42.805564       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0709 18:19:42.871569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0709 18:19:42.907682       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0709 18:19:57.333598       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0709 18:19:57.543081       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a89ee753e775] <==
	I0709 18:19:56.607170       1 shared_informer.go:320] Caches are synced for deployment
	I0709 18:19:56.607828       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0709 18:19:56.608028       1 shared_informer.go:320] Caches are synced for endpoint
	I0709 18:19:56.608559       1 shared_informer.go:320] Caches are synced for job
	I0709 18:19:56.608897       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-849000" podCIDRs=["10.244.0.0/24"]
	I0709 18:19:56.612136       1 shared_informer.go:320] Caches are synced for PV protection
	I0709 18:19:56.613536       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0709 18:19:56.667448       1 shared_informer.go:320] Caches are synced for attach detach
	I0709 18:19:56.718158       1 shared_informer.go:320] Caches are synced for resource quota
	I0709 18:19:56.736984       1 shared_informer.go:320] Caches are synced for resource quota
	I0709 18:19:57.154681       1 shared_informer.go:320] Caches are synced for garbage collector
	I0709 18:19:57.154714       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0709 18:19:57.208598       1 shared_informer.go:320] Caches are synced for garbage collector
	I0709 18:19:57.743180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="172.458844ms"
	I0709 18:19:57.765649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.805292ms"
	I0709 18:19:57.815368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.660854ms"
	I0709 18:19:57.815916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.6µs"
	I0709 18:19:58.007755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.828816ms"
	I0709 18:19:58.026709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.106923ms"
	I0709 18:19:58.029403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.1µs"
	I0709 18:20:07.977654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.049991ms"
	I0709 18:20:08.015594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111µs"
	I0709 18:20:09.991729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.353168ms"
	I0709 18:20:10.001112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="868.106µs"
	I0709 18:20:11.554561       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [02ab9d172768] <==
	I0709 18:19:58.913720       1 server_linux.go:69] "Using iptables proxy"
	I0709 18:19:58.935439       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.206.134"]
	I0709 18:19:59.002175       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 18:19:59.002345       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 18:19:59.002422       1 server_linux.go:165] "Using iptables Proxier"
	I0709 18:19:59.006984       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 18:19:59.008394       1 server.go:872] "Version info" version="v1.30.2"
	I0709 18:19:59.008567       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 18:19:59.012208       1 config.go:192] "Starting service config controller"
	I0709 18:19:59.012230       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 18:19:59.012257       1 config.go:101] "Starting endpoint slice config controller"
	I0709 18:19:59.012263       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 18:19:59.014777       1 config.go:319] "Starting node config controller"
	I0709 18:19:59.015900       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 18:19:59.113145       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0709 18:19:59.113150       1 shared_informer.go:320] Caches are synced for service config
	I0709 18:19:59.116402       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8661e349d48a] <==
	W0709 18:19:40.760717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0709 18:19:40.760830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0709 18:19:40.849864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0709 18:19:40.850245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0709 18:19:40.865437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.865496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.872200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0709 18:19:40.872364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0709 18:19:40.917325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.917365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.931008       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.931093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.976206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0709 18:19:40.976434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0709 18:19:41.005485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0709 18:19:41.005666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0709 18:19:41.019785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0709 18:19:41.020146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0709 18:19:41.110495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 18:19:41.110614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0709 18:19:41.120707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0709 18:19:41.122629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0709 18:19:41.253897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0709 18:19:41.254338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0709 18:19:43.553553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 09 18:20:08 multinode-849000 kubelet[2293]: I0709 18:20:08.084948    2293 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5mc4\" (UniqueName: \"kubernetes.io/projected/6c7d1b2d-c741-4944-ad9f-17ee3c9f881e-kube-api-access-s5mc4\") pod \"storage-provisioner\" (UID: \"6c7d1b2d-c741-4944-ad9f-17ee3c9f881e\") " pod="kube-system/storage-provisioner"
	Jul 09 18:20:08 multinode-849000 kubelet[2293]: I0709 18:20:08.085031    2293 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6c7d1b2d-c741-4944-ad9f-17ee3c9f881e-tmp\") pod \"storage-provisioner\" (UID: \"6c7d1b2d-c741-4944-ad9f-17ee3c9f881e\") " pod="kube-system/storage-provisioner"
	Jul 09 18:20:08 multinode-849000 kubelet[2293]: I0709 18:20:08.744573    2293 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d8c6b21616c767448c4be98bae932ed2b404a3dadcf2b2b4b157e8bcf347ea"
	Jul 09 18:20:08 multinode-849000 kubelet[2293]: I0709 18:20:08.836053    2293 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a33ce3348449c0faec48fb58b4574718ba6b78d837824e60579921c71f06d76"
	Jul 09 18:20:09 multinode-849000 kubelet[2293]: I0709 18:20:09.964971    2293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.964952461 podStartE2EDuration="4.964952461s" podCreationTimestamp="2024-07-09 18:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-09 18:20:09.883491722 +0000 UTC m=+27.214085149" watchObservedRunningTime="2024-07-09 18:20:09.964952461 +0000 UTC m=+27.295545988"
	Jul 09 18:20:42 multinode-849000 kubelet[2293]: E0709 18:20:42.972446    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:20:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:20:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:20:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:20:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:21:42 multinode-849000 kubelet[2293]: E0709 18:21:42.974662    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:21:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:21:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:21:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:21:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:22:42 multinode-849000 kubelet[2293]: E0709 18:22:42.973836    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:22:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:22:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:22:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:22:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:23:42 multinode-849000 kubelet[2293]: E0709 18:23:42.983652    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:23:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:23:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:23:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:23:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [37c7b8e14dc9] <==
	I0709 18:20:09.057077       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0709 18:20:09.079655       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0709 18:20:09.079903       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0709 18:20:09.126660       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0709 18:20:09.126961       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-849000_6de5186f-60e7-46e7-ab51-a1dcafaef8f6!
	I0709 18:20:09.135679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ff72458-ea1d-45ee-8401-48a13fcbb227", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-849000_6de5186f-60e7-46e7-ab51-a1dcafaef8f6 became leader
	I0709 18:20:09.242255       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-849000_6de5186f-60e7-46e7-ab51-a1dcafaef8f6!
	

-- /stdout --
** stderr ** 
	W0709 11:23:55.376346    4528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000: (11.9606947s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-849000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (461.31s)

TestMultiNode/serial/DeployApp2Nodes (735.2s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- rollout status deployment/busybox
E0709 11:24:27.366663   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 11:25:30.100069   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 11:28:04.132849   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 11:30:30.103461   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 11:33:04.132482   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- rollout status deployment/busybox: exit status 1 (10m3.7172777s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 2 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 2 updated replicas are available...

-- /stdout --
** stderr ** 
	W0709 11:24:17.442753   12492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0709 11:34:21.154045   13676 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0709 11:34:22.664127   15204 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0709 11:34:25.287288    9832 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0709 11:34:28.639370   10396 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0709 11:34:32.599000    4360 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0709 11:34:38.178961    1492 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0709 11:34:48.287613   12784 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0709 11:35:01.173883    9376 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
E0709 11:35:13.336455   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0709 11:35:20.433358    9640 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
E0709 11:35:30.098090   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0709 11:35:53.162496    2452 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:524: failed to resolve pod IPs: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --\n** stderr ** \n\tW0709 11:35:53.162496    2452 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n\n** /stderr **"
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-4hjks -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-4hjks -- nslookup kubernetes.io: exit status 1 (339.3254ms)

** stderr ** 
	W0709 11:35:53.827242    4560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-4hjks does not have a host assigned

** /stderr **
multinode_test.go:538: Pod busybox-fc5497c4f-4hjks could not resolve 'kubernetes.io': exit status 1
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-f2j8m -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-f2j8m -- nslookup kubernetes.io: (1.8393897s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-4hjks -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-4hjks -- nslookup kubernetes.default: exit status 1 (355.1574ms)

** stderr ** 
	W0709 11:35:56.004406    7492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-4hjks does not have a host assigned

** /stderr **
multinode_test.go:548: Pod busybox-fc5497c4f-4hjks could not resolve 'kubernetes.default': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-f2j8m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-4hjks -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-4hjks -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (489.5041ms)

** stderr ** 
	W0709 11:35:56.931040    3316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-4hjks does not have a host assigned

** /stderr **
multinode_test.go:556: Pod busybox-fc5497c4f-4hjks could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-f2j8m -- nslookup kubernetes.default.svc.cluster.local
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000: (12.4048543s)
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25: (8.3573165s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-823500                           | mount-start-2-823500 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:13 PDT | 09 Jul 24 11:15 PDT |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-823500 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:15 PDT |                     |
	|         | --profile mount-start-2-823500 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-823500 ssh -- ls                    | mount-start-2-823500 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:15 PDT | 09 Jul 24 11:16 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-823500                           | mount-start-2-823500 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:16 PDT | 09 Jul 24 11:16 PDT |
	| delete  | -p mount-start-1-823500                           | mount-start-1-823500 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:16 PDT | 09 Jul 24 11:16 PDT |
	| start   | -p multinode-849000                               | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:16 PDT |                     |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- apply -f                   | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:24 PDT | 09 Jul 24 11:24 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- rollout                    | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:24 PDT |                     |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 11:16:35
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 11:16:35.706571   11080 out.go:291] Setting OutFile to fd 1856 ...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.707294   11080 out.go:304] Setting ErrFile to fd 1916...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.730175   11080 out.go:298] Setting JSON to false
	I0709 11:16:35.734088   11080 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7264,"bootTime":1720541731,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 11:16:35.734088   11080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 11:16:35.740900   11080 out.go:177] * [multinode-849000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 11:16:35.746952   11080 notify.go:220] Checking for updates...
	I0709 11:16:35.749517   11080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:16:35.752016   11080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 11:16:35.754074   11080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 11:16:35.757149   11080 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 11:16:35.759785   11080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 11:16:35.763232   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:16:35.763232   11080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 11:16:41.108594   11080 out.go:177] * Using the hyperv driver based on user configuration
	I0709 11:16:41.113436   11080 start.go:297] selected driver: hyperv
	I0709 11:16:41.113436   11080 start.go:901] validating driver "hyperv" against <nil>
	I0709 11:16:41.113436   11080 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 11:16:41.161717   11080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 11:16:41.163562   11080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:16:41.163562   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:16:41.163562   11080 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0709 11:16:41.163562   11080 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0709 11:16:41.163562   11080 start.go:340] cluster config:
	{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:16:41.164325   11080 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 11:16:41.169436   11080 out.go:177] * Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	I0709 11:16:41.171790   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:16:41.171790   11080 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 11:16:41.171790   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:16:41.172900   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:16:41.173204   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:16:41.173497   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:16:41.173834   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json: {Name:mkcd76fd0991636c9ebb3945d5f6230c136234ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:360] acquireMachinesLock for multinode-849000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-849000"
	I0709 11:16:41.175145   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:16:41.175717   11080 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 11:16:41.178833   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:16:41.179697   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:16:41.179858   11080 client.go:168] LocalClient.Create starting
	I0709 11:16:41.180393   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181037   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:16:41.181305   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.181363   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181499   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:43.203345   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:16:44.905448   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:49.977487   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:49.978001   11080 main.go:141] libmachine: [stderr =====>] : 
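	The step above shells out to powershell.exe and parses the JSON that the Get-VMSwitch pipeline prints to stdout. A minimal Go sketch of that parsing step, assuming the JSON shape shown in the log (the VMSwitch struct and parseSwitches helper are illustrative names, not minikube's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// VMSwitch mirrors the three fields selected by the Get-VMSwitch
// pipeline in the log: Select Id, Name, SwitchType.
type VMSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// parseSwitches decodes the JSON array that ConvertTo-Json emits.
func parseSwitches(raw string) ([]VMSwitch, error) {
	var switches []VMSwitch
	if err := json.Unmarshal([]byte(raw), &switches); err != nil {
		return nil, err
	}
	return switches, nil
}

func main() {
	// Stdout captured from the PowerShell invocation above.
	raw := `[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]`
	switches, err := parseSwitches(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(switches[0].Name)
}
```

	With the single internal switch returned here, the driver later logs `Using switch "Default Switch"`.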
	I0709 11:16:49.980413   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:16:50.481409   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: Creating VM...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:53.557877   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:16:53.557877   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:55.342337   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:55.343188   11080 main.go:141] libmachine: Creating VHD
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:16:59.073202   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 250EFD27-3D80-4D94-9BBB-C36AC3EE4AF2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:16:59.073277   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:16:59.081799   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:02.356056   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -SizeBytes 20000MB
	I0709 11:17:04.920871   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:04.921598   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:04.921696   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-849000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000 -DynamicMemoryEnabled $false
	I0709 11:17:10.906954   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000 -Count 2
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:13.117046   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\boot2docker.iso'
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:15.734748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd'
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:18.434648   11080 main.go:141] libmachine: Starting VM...
	I0709 11:17:18.434648   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000
	I0709 11:17:21.548427   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:23.856308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:23.857327   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:23.857477   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:26.424823   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:26.425555   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:27.429457   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:29.669589   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:33.238604   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:35.539152   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:39.150748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:41.412758   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:43.945561   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:43.946556   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:44.948904   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:47.223493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:49.888321   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [stderr =====>] : 
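	The "Waiting for host to start..." stretch above is a poll loop: the driver re-runs the Get-VM state and ipaddresses queries every few seconds until the adapter reports a non-empty address (172.18.206.134 on roughly the sixth attempt here). A minimal sketch of that retry pattern; waitForIP and its attempt budget are illustrative, not minikube's actual API:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls query until it returns a non-empty address or the
// attempt budget runs out, mirroring the repeated ipaddresses[0]
// invocations in the log.
func waitForIP(query func() (string, error), attempts int, interval time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		ip, err := query()
		if err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(interval)
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	calls := 0
	// Stand-in for the PowerShell query: empty stdout for the first
	// few polls (as in the log), then the observed address.
	query := func() (string, error) {
		calls++
		if calls < 6 {
			return "", nil
		}
		return "172.18.206.134", nil
	}
	ip, err := waitForIP(query, 10, time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}
```

	Once an address comes back, provisioning proceeds over SSH against it, as the provisionDockerMachine lines below show.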
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:52.029346   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:17:52.029346   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:54.184452   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:56.739762   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:56.740551   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:56.747332   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:17:56.757962   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:17:56.757962   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:17:56.888454   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:17:56.888454   11080 buildroot.go:166] provisioning hostname "multinode-849000"
	I0709 11:17:56.888632   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:58.996092   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:01.596255   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:01.596966   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:01.596966   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000 && echo "multinode-849000" | sudo tee /etc/hostname
	I0709 11:18:01.744135   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000
	
	I0709 11:18:01.744309   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:03.902843   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:06.504362   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:06.505105   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:06.511047   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:06.511730   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:06.511730   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:18:06.661183   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:18:06.661276   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:18:06.661276   11080 buildroot.go:174] setting up certificates
	I0709 11:18:06.661276   11080 provision.go:84] configureAuth start
	I0709 11:18:06.661404   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:08.870371   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:08.871487   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:08.871619   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:11.480657   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:13.679886   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:13.680032   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:13.680386   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:16.351593   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:16.351812   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:16.351812   11080 provision.go:143] copyHostCerts
	I0709 11:18:16.351812   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:18:16.351812   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:18:16.352341   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:18:16.352562   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:18:16.353746   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:18:16.353870   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:18:16.353870   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:18:16.354397   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:18:16.355454   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:18:16.355782   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:18:16.355782   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:18:16.356143   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:18:16.357550   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000 san=[127.0.0.1 172.18.206.134 localhost minikube multinode-849000]
	I0709 11:18:16.528750   11080 provision.go:177] copyRemoteCerts
	I0709 11:18:16.542866   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:18:16.543526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:18.745596   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:18.746390   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:18.746524   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:21.394478   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:21.394661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:21.394962   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:21.507114   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9635719s)
	I0709 11:18:21.507261   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:18:21.507746   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:18:21.555636   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:18:21.556231   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0709 11:18:21.603561   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:18:21.604047   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:18:21.651880   11080 provision.go:87] duration metric: took 14.9904677s to configureAuth
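The `configureAuth` step above generates a server certificate signed by the minikube CA with the SAN list reported in the log (`127.0.0.1 172.18.206.134 localhost minikube multinode-849000`), then scp's `ca.pem`, `server.pem`, and `server-key.pem` into `/etc/docker`. minikube does this in Go (`crypto/x509`), not with openssl; the commands below are only an equivalent sketch, with all paths and subject names chosen for illustration:

```shell
# Equivalent of provision.go's server-cert generation, sketched with openssl.
# Paths and subjects are illustrative; minikube does this in Go, not openssl.
dir=$(mktemp -d)
# stand-in for ~/.minikube/certs/{ca.pem,ca-key.pem}
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/ca-key.pem" -out "$dir/ca.pem" -subj "/CN=minikubeCA" 2>/dev/null
# server key + certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout "$dir/server-key.pem" -out "$dir/server.csr" -subj "/CN=minikube" 2>/dev/null
# sign with the SAN list reported in the log
printf 'subjectAltName=IP:127.0.0.1,IP:172.18.206.134,DNS:localhost,DNS:minikube,DNS:multinode-849000\n' > "$dir/san.cnf"
openssl x509 -req -in "$dir/server.csr" -CA "$dir/ca.pem" -CAkey "$dir/ca-key.pem" \
  -CAcreateserial -days 1 -extfile "$dir/san.cnf" -out "$dir/server.pem" 2>/dev/null
```

The resulting `ca.pem` / `server.pem` / `server-key.pem` triple corresponds to the three files the log then copies into `/etc/docker` for dockerd's `--tlsverify` flags.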
	I0709 11:18:21.651880   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:18:21.652889   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:18:21.652889   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:23.890387   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:26.564345   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:26.565125   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:26.565125   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:18:26.688579   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:18:26.688579   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:18:26.688751   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:18:26.688751   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:28.871918   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:31.502951   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:31.503345   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:31.503345   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:18:31.658280   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:18:31.658412   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:33.800464   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:36.418307   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:36.418361   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:36.423718   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:36.423718   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:36.424298   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:18:38.623401   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:18:38.623401   11080 machine.go:97] duration metric: took 46.5939015s to provisionDockerMachine
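The unit install above is a compare-and-swap: the candidate is written to `docker.service.new`, and the move plus `daemon-reload`/`enable`/`restart` happen only when `diff` reports a change. The `diff: can't stat` message in the output simply means no unit existed yet, so a first install always takes the swap branch. A generic sketch of that pattern, minus the `systemctl` calls (which need a live systemd):

```shell
# Install a candidate file over a target only when the content differs,
# as in the docker.service update above. Returns 0 when a swap happened.
install_if_changed() {
  target="$1"; candidate="$2"
  if diff -u "$target" "$candidate" >/dev/null 2>&1; then
    rm -f "$candidate"        # identical: nothing to do
    return 1
  fi
  mv "$candidate" "$target"   # differs, or target missing: swap in
  # the real flow continues: systemctl daemon-reload && systemctl restart docker
  return 0
}
```

Gating the reload on the diff avoids restarting dockerd on every provision pass when the unit has not actually changed.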
	I0709 11:18:38.624385   11080 client.go:171] duration metric: took 1m57.4441387s to LocalClient.Create
	I0709 11:18:38.624385   11080 start.go:167] duration metric: took 1m57.4442999s to libmachine.API.Create "multinode-849000"
	I0709 11:18:38.624385   11080 start.go:293] postStartSetup for "multinode-849000" (driver="hyperv")
	I0709 11:18:38.624385   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:18:38.635377   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:18:38.635377   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:40.803077   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:40.803227   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:40.803332   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:43.382675   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:43.483674   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8482809s)
	I0709 11:18:43.496129   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:18:43.504466   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:18:43.504466   11080 command_runner.go:130] > ID=buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:18:43.504466   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:18:43.504466   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:18:43.504466   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:18:43.505074   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:18:43.506014   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:18:43.506014   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:18:43.518207   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:18:43.536167   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:18:43.580014   11080 start.go:296] duration metric: took 4.955526s for postStartSetup
	I0709 11:18:43.583840   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:45.720485   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:48.244917   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:18:48.247885   11080 start.go:128] duration metric: took 2m7.0717492s to createHost
	I0709 11:18:48.247974   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:50.357356   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:52.893710   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:52.893837   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:52.893837   11080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 11:18:53.018311   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549133.027082640
	
	I0709 11:18:53.018311   11080 fix.go:216] guest clock: 1720549133.027082640
	I0709 11:18:53.018311   11080 fix.go:229] Guest: 2024-07-09 11:18:53.02708264 -0700 PDT Remote: 2024-07-09 11:18:48.2478857 -0700 PDT m=+132.622337601 (delta=4.77919694s)
	I0709 11:18:53.018461   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:55.134647   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:57.706817   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:57.707574   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:57.707574   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549133
	I0709 11:18:57.837990   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:18:53 UTC 2024
	
	I0709 11:18:57.837990   11080 fix.go:236] clock set: Tue Jul  9 18:18:53 UTC 2024
	 (err=<nil>)
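The clock fix above reads the guest clock (the `date +%!s(MISSING).%!N(MISSING)` line is the log formatter mangling a literal `%s.%N` in the command string, not what actually ran: the guest executed `date +%s.%N`), compares it with the host time, and resets the guest with `sudo date -s @<epoch>` when the drift is large enough; here the delta was 4.78s. A sketch of the drift decision, with the 2-second threshold an assumption for illustration:

```shell
# Decide whether guest/host clock drift warrants a reset, as in fix.go above.
# Exits 0 (true) when |guest - host| exceeds the threshold (default 2s,
# an illustrative value, not necessarily minikube's).
clock_needs_fix() {
  guest="$1"; host="$2"; threshold="${3:-2}"
  # absolute drift, computed with awk so fractional seconds are handled
  awk -v g="$guest" -v h="$host" -v t="$threshold" \
    'BEGIN { d = g - h; if (d < 0) d = -d; exit !(d > t) }'
}
```

With the values from the log (guest 1720549133.027 vs. remote 1720549128.247), the drift exceeds the threshold, matching the `sudo date -s @1720549133` that follows.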
	I0709 11:18:57.837990   11080 start.go:83] releasing machines lock for "multinode-849000", held for 2m16.662394s
	I0709 11:18:57.837990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:59.937542   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:02.440702   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:19:02.440914   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:02.450148   11080 ssh_runner.go:195] Run: cat /version.json
	I0709 11:19:02.451159   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.652788   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:07.368844   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.369236   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.369437   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.395266   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.516234   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:19:07.516234   11080 command_runner.go:130] > {"iso_version": "v1.33.1-1720433170-19199", "kicbase_version": "v0.0.44-1720012048-19186", "minikube_version": "v1.33.1", "commit": "41ed6339bbe6a947e5e92015e7dd216db14d0b72"}
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: cat /version.json: (5.0661785s)
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0755151s)
	I0709 11:19:07.529057   11080 ssh_runner.go:195] Run: systemctl --version
	I0709 11:19:07.538439   11080 command_runner.go:130] > systemd 252 (252)
	I0709 11:19:07.538533   11080 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0709 11:19:07.550293   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:19:07.559188   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0709 11:19:07.559555   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:19:07.570397   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:19:07.596860   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:19:07.598042   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
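The `find` above (again, `%!p(MISSING)` is a mangled literal `%p` format verb in the log) renames any bridge or podman CNI config to `<name>.mk_disabled` so the CNI runtime ignores it. The same rename, exercised against a scratch directory instead of `/etc/cni/net.d`:

```shell
# Disable bridge/podman CNI configs by renaming them, as in the log above.
# Uses a scratch directory; the real command runs under sudo on /etc/cni/net.d.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-flannel.conflist"
find "$d" -maxdepth 1 -type f \( -name '*bridge*' -o -name '*podman*' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```

Only the matching file is renamed; unrelated CNI configs (the hypothetical flannel one here) are left in place, matching the single `87-podman-bridge.conflist` the log reports as disabled.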
	I0709 11:19:07.598090   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:07.598448   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:07.631211   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:19:07.642798   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:19:07.672487   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:19:07.691044   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:19:07.702345   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:19:07.737161   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.766120   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:19:07.798415   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.831110   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:19:07.865314   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:19:07.899412   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:19:07.929191   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:19:07.959649   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:19:07.977886   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:19:07.990402   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:19:08.021057   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:08.212039   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
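The run of `sed` edits above rewrites `/etc/containerd/config.toml` in place, most notably forcing `SystemdCgroup = false` so containerd uses the cgroupfs driver (matching the `configuring containerd to use "cgroupfs"` line). The key edit, applied to a scratch copy of a minimal config fragment:

```shell
# Force the cgroupfs driver in a containerd config, as in the log above.
# The sample file is a minimal illustrative fragment, not a full config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# same indentation-preserving substitution as the logged command
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```

The capture group keeps the original indentation, so the edit works regardless of how deeply the option is nested in the TOML.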
	I0709 11:19:08.247477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:08.260899   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Unit]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:19:08.287773   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:19:08.287773   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:19:08.287773   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Service]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Type=notify
	I0709 11:19:08.287773   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:19:08.287773   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:19:08.287773   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:19:08.287773   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:19:08.287773   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:19:08.287773   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:19:08.287773   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:19:08.287773   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:19:08.288322   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:19:08.288322   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:19:08.288322   11080 command_runner.go:130] > ExecStart=
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:19:08.288380   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:19:08.288380   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:19:08.288532   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:19:08.288603   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:19:08.288603   11080 command_runner.go:130] > Delegate=yes
	I0709 11:19:08.288603   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:19:08.288644   11080 command_runner.go:130] > KillMode=process
	I0709 11:19:08.288644   11080 command_runner.go:130] > [Install]
	I0709 11:19:08.288644   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:19:08.299913   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.334941   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:19:08.378216   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.411780   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.445847   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:19:08.504747   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.527698   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:08.557879   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:19:08.569949   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:19:08.575730   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:19:08.587321   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:19:08.604542   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:19:08.652744   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:19:08.860138   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:19:09.036606   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:19:09.036846   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:19:09.086669   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:09.274594   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:11.819580   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5449771s)
	I0709 11:19:11.830623   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 11:19:11.865432   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:11.899527   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 11:19:12.080125   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 11:19:12.263695   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.465673   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 11:19:12.506610   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:12.540854   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.740781   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 11:19:12.845180   11080 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 11:19:12.856179   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0709 11:19:12.864333   11080 command_runner.go:130] > Device: 0,22	Inode: 881         Links: 1
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864333   11080 command_runner.go:130] > Modify: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] > Change: 2024-07-09 18:19:12.777376059 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:12.865396   11080 start.go:562] Will wait 60s for crictl version
	I0709 11:19:12.878013   11080 ssh_runner.go:195] Run: which crictl
	I0709 11:19:12.883453   11080 command_runner.go:130] > /usr/bin/crictl
	I0709 11:19:12.896196   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 11:19:12.945750   11080 command_runner.go:130] > Version:  0.1.0
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeName:  docker
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeApiVersion:  v1
	I0709 11:19:12.946914   11080 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 11:19:12.955749   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:12.986144   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:12.997084   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:13.033222   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:13.039328   11080 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 11:19:13.039536   11080 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: 172.18.192.1/20
	I0709 11:19:13.058315   11080 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 11:19:13.064313   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:13.085011   11080 kubeadm.go:877] updating cluster {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 11:19:13.085193   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:19:13.094647   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:13.119600   11080 docker.go:685] Got preloaded images: 
	I0709 11:19:13.119753   11080 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 11:19:13.132471   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:13.150071   11080 command_runner.go:139] > {"Repositories":{}}
	I0709 11:19:13.160388   11080 ssh_runner.go:195] Run: which lz4
	I0709 11:19:13.168652   11080 command_runner.go:130] > /usr/bin/lz4
	I0709 11:19:13.168652   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0709 11:19:13.180500   11080 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0709 11:19:13.186301   11080 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0709 11:19:14.857940   11080 docker.go:649] duration metric: took 1.6892825s to copy over tarball
	I0709 11:19:14.870175   11080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 11:19:23.389025   11080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5188212s)
	I0709 11:19:23.389025   11080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0709 11:19:23.458573   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:23.485866   11080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0709 11:19:23.486188   11080 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 11:19:23.533118   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:23.744757   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:27.380382   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6356119s)
	I0709 11:19:27.389977   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0709 11:19:27.415657   11080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:27.415657   11080 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 11:19:27.415657   11080 cache_images.go:84] Images are preloaded, skipping loading
	I0709 11:19:27.415657   11080 kubeadm.go:928] updating node { 172.18.206.134 8443 v1.30.2 docker true true} ...
	I0709 11:19:27.415657   11080 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-849000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.206.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 11:19:27.423616   11080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 11:19:27.458657   11080 command_runner.go:130] > cgroupfs
	I0709 11:19:27.459385   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:27.459385   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:27.459452   11080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 11:19:27.459452   11080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.206.134 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-849000 NodeName:multinode-849000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.206.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.206.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 11:19:27.459589   11080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.206.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-849000"
	  kubeletExtraArgs:
	    node-ip: 172.18.206.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.206.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0709 11:19:27.472965   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubeadm
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubectl
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubelet
	I0709 11:19:27.499841   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 11:19:27.511476   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 11:19:27.527506   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0709 11:19:27.555887   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 11:19:27.582917   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0709 11:19:27.625088   11080 ssh_runner.go:195] Run: grep 172.18.206.134	control-plane.minikube.internal$ /etc/hosts
	I0709 11:19:27.629979   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.206.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:27.662105   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:27.863890   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:27.891871   11080 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000 for IP: 172.18.206.134
	I0709 11:19:27.891871   11080 certs.go:194] generating shared ca certs ...
	I0709 11:19:27.891974   11080 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 11:19:27.893231   11080 certs.go:256] generating profile certs ...
	I0709 11:19:27.894104   11080 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key
	I0709 11:19:27.894284   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt with IP's: []
	I0709 11:19:28.075685   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt ...
	I0709 11:19:28.075685   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt: {Name:mk25257931a758267f442465386bb9bdebfd15e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.077683   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key ...
	I0709 11:19:28.077683   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key: {Name:mk28ea0dfb093b7e1eceacf2d9e8a6ee777dbd4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.078679   11080 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab
	I0709 11:19:28.078679   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.206.134]
	I0709 11:19:28.282674   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab ...
	I0709 11:19:28.282674   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab: {Name:mk6d3927cc1582195a75050ba0c963c9f3cc6b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.284187   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab ...
	I0709 11:19:28.284187   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab: {Name:mk7c2c31b56e9fbc5ac0d0a2d8ec4a706b474e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.285485   11080 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt
	I0709 11:19:28.296251   11080 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key
	I0709 11:19:28.297243   11080 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key
	I0709 11:19:28.297243   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt with IP's: []
	I0709 11:19:28.588714   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt ...
	I0709 11:19:28.588714   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt: {Name:mk558fea8586bf42355b37f550a2aab396534e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590476   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key ...
	I0709 11:19:28.590476   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key: {Name:mk91292cc98d71191163856df723afdf525149d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 11:19:28.591953   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 11:19:28.592200   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 11:19:28.592414   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 11:19:28.592581   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 11:19:28.592751   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 11:19:28.601940   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 11:19:28.602968   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 11:19:28.602968   11080 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 11:19:28.603997   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 11:19:28.604332   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 11:19:28.604696   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 11:19:28.605757   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 11:19:28.606105   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 11:19:28.606281   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:28.607895   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 11:19:28.657063   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 11:19:28.708475   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 11:19:28.753169   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 11:19:28.799111   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0709 11:19:28.843096   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 11:19:28.892474   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 11:19:28.936778   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 11:19:28.983720   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 11:19:29.032197   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 11:19:29.078840   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 11:19:29.121438   11080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 11:19:29.166376   11080 ssh_runner.go:195] Run: openssl version
	I0709 11:19:29.174606   11080 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0709 11:19:29.186263   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 11:19:29.214563   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221452   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221529   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.233587   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.241034   11080 command_runner.go:130] > 51391683
	I0709 11:19:29.253531   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 11:19:29.287599   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 11:19:29.319642   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.340563   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.351251   11080 command_runner.go:130] > 3ec20f2e
	I0709 11:19:29.363289   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 11:19:29.394996   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 11:19:29.430863   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439488   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439598   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.451335   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.461060   11080 command_runner.go:130] > b5213941
	I0709 11:19:29.472325   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 11:19:29.502349   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 11:19:29.508349   11080 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.508349   11080 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.509336   11080 kubeadm.go:391] StartCluster: {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:19:29.517326   11080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 11:19:29.552571   11080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0709 11:19:29.583129   11080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 11:19:29.614110   11080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0709 11:19:29.630668   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631001   11080 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631083   11080 kubeadm.go:156] found existing configuration files:
	
	I0709 11:19:29.643858   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 11:19:29.660913   11080 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.660913   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.672874   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 11:19:29.701166   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 11:19:29.719398   11080 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.719398   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.732866   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 11:19:29.764341   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.780362   11080 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.781070   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.793378   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.822887   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 11:19:29.839358   11080 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.839848   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.851450   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0709 11:19:29.868927   11080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 11:19:30.273184   11080 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:30.273184   11080 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:43.382099   11080 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [preflight] Running pre-flight checks
	I0709 11:19:43.382302   11080 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 11:19:43.382490   11080 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382562   11080 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.382843   11080 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.385956   11080 out.go:204]   - Generating certificates and keys ...
	I0709 11:19:43.386701   11080 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0709 11:19:43.386720   11080 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 11:19:43.386939   11080 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386963   11080 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.387517   11080 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387517   11080 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387702   11080 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387746   11080 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387967   11080 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.387967   11080 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.388299   11080 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388370   11080 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388585   11080 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388585   11080 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.392839   11080 out.go:204]   - Booting up control plane ...
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.395906   11080 kubeadm.go:309] [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.396929   11080 kubeadm.go:309] [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 command_runner.go:130] > [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 kubeadm.go:309] [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.396929   11080 command_runner.go:130] > [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.399982   11080 out.go:204]   - Configuring RBAC rules ...
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.401848   11080 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.401848   11080 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.405851   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:43.405851   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:43.408882   11080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0709 11:19:43.427890   11080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0709 11:19:43.436838   11080 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: 2024-07-09 18:17:47.269542400 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Modify: 2024-07-08 15:41:40.000000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Change: 2024-07-09 11:17:38.873000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:43.437660   11080 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0709 11:19:43.437724   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0709 11:19:43.486974   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0709 11:19:44.013734   11080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.028712   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.056718   11080 command_runner.go:130] > serviceaccount/kindnet created
	I0709 11:19:44.082804   11080 command_runner.go:130] > daemonset.apps/kindnet created
	I0709 11:19:44.086715   11080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-849000 minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=multinode-849000 minikube.k8s.io/primary=true
	I0709 11:19:44.115923   11080 command_runner.go:130] > -16
	I0709 11:19:44.121702   11080 ops.go:34] apiserver oom_adj: -16
	I0709 11:19:44.326882   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0709 11:19:44.332192   11080 command_runner.go:130] > node/multinode-849000 labeled
	I0709 11:19:44.342094   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.456107   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:44.849260   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.954493   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.356403   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.456462   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.855390   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.956473   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.355707   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.465842   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.857102   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.969191   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.359571   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.471625   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.845990   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.968255   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.348435   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.444253   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.849560   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.962518   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.355988   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.464938   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.857549   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.960971   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.358892   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.517544   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.859431   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.965459   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.346160   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.448688   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.850874   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.960813   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.349922   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.460568   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.858017   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.978603   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.347266   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.460858   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.852199   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.970042   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.358007   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.467115   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.847966   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.971538   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.352008   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.457997   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.855006   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.967023   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.356509   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.497561   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.848447   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.958599   11080 command_runner.go:130] > NAME      SECRETS   AGE
	I0709 11:19:56.958599   11080 command_runner.go:130] > default   0         0s
	I0709 11:19:56.958599   11080 kubeadm.go:1107] duration metric: took 12.8717652s to wait for elevateKubeSystemPrivileges
	W0709 11:19:56.958599   11080 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 11:19:56.958599   11080 kubeadm.go:393] duration metric: took 27.4491691s to StartCluster
	I0709 11:19:56.958599   11080 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.958599   11080 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:56.961504   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.963374   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 11:19:56.963460   11080 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:19:56.963460   11080 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0709 11:19:56.963779   11080 addons.go:69] Setting default-storageclass=true in profile "multinode-849000"
	I0709 11:19:56.963724   11080 addons.go:69] Setting storage-provisioner=true in profile "multinode-849000"
	I0709 11:19:56.963837   11080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-849000"
	I0709 11:19:56.963837   11080 addons.go:234] Setting addon storage-provisioner=true in "multinode-849000"
	I0709 11:19:56.963837   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:56.963837   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:19:56.964647   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.965248   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.970232   11080 out.go:177] * Verifying Kubernetes components...
	I0709 11:19:56.985249   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:57.211673   11080 command_runner.go:130] > apiVersion: v1
	I0709 11:19:57.211752   11080 command_runner.go:130] > data:
	I0709 11:19:57.211752   11080 command_runner.go:130] >   Corefile: |
	I0709 11:19:57.211752   11080 command_runner.go:130] >     .:53 {
	I0709 11:19:57.211752   11080 command_runner.go:130] >         errors
	I0709 11:19:57.211752   11080 command_runner.go:130] >         health {
	I0709 11:19:57.211752   11080 command_runner.go:130] >            lameduck 5s
	I0709 11:19:57.211752   11080 command_runner.go:130] >         }
	I0709 11:19:57.211752   11080 command_runner.go:130] >         ready
	I0709 11:19:57.211825   11080 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0709 11:19:57.211825   11080 command_runner.go:130] >            pods insecure
	I0709 11:19:57.211825   11080 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0709 11:19:57.211825   11080 command_runner.go:130] >            ttl 30
	I0709 11:19:57.211825   11080 command_runner.go:130] >         }
	I0709 11:19:57.211825   11080 command_runner.go:130] >         prometheus :9153
	I0709 11:19:57.211825   11080 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0709 11:19:57.211914   11080 command_runner.go:130] >            max_concurrent 1000
	I0709 11:19:57.211914   11080 command_runner.go:130] >         }
	I0709 11:19:57.211914   11080 command_runner.go:130] >         cache 30
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loop
	I0709 11:19:57.211914   11080 command_runner.go:130] >         reload
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loadbalance
	I0709 11:19:57.212061   11080 command_runner.go:130] >     }
	I0709 11:19:57.212061   11080 command_runner.go:130] > kind: ConfigMap
	I0709 11:19:57.212061   11080 command_runner.go:130] > metadata:
	I0709 11:19:57.212127   11080 command_runner.go:130] >   creationTimestamp: "2024-07-09T18:19:42Z"
	I0709 11:19:57.212127   11080 command_runner.go:130] >   name: coredns
	I0709 11:19:57.212127   11080 command_runner.go:130] >   namespace: kube-system
	I0709 11:19:57.212127   11080 command_runner.go:130] >   resourceVersion: "259"
	I0709 11:19:57.212301   11080 command_runner.go:130] >   uid: 7f6d77d9-aa71-4460-bf8f-36c58243a4c9
	I0709 11:19:57.212540   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 11:19:57.402732   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:57.866428   11080 command_runner.go:130] > configmap/coredns replaced
	I0709 11:19:57.866428   11080 start.go:946] {"host.minikube.internal": 172.18.192.1} host record injected into CoreDNS's ConfigMap
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.869413   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.870414   11080 cert_rotation.go:137] Starting client certificate rotation controller
	I0709 11:19:57.870414   11080 node_ready.go:35] waiting up to 6m0s for node "multinode-849000" to be "Ready" ...
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.885872   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.885872   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Audit-Id: 6bb3d639-9069-4a29-8363-06f8a9831c96
	I0709 11:19:57.886681   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.886681   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:57.887054   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Audit-Id: f8472087-a57e-416c-8eb7-93f828e86e4a
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.887125   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.887908   11080 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.888641   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.888641   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:19:57.888641   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.922291   11080 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0709 11:19:57.922618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Audit-Id: 71677033-c49e-4d37-8393-48341086209c
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.922733   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"391","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.384286   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:19:58.384390   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384390   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 0be5af66-01cb-451f-b03f-f7b17cb342f0
	I0709 11:19:58.384457   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 73b21b85-deb0-469b-929c-809b7004c7a7
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"401","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:58.384457   11080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-849000" context rescaled to 1 replicas
	I0709 11:19:58.870813   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.871025   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.871025   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.871025   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.873618   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:19:58.873618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Audit-Id: ad90069a-940e-4cdb-af81-263d232584a4
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.874322   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.874523   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.317106   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:59.317937   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:59.319000   11080 addons.go:234] Setting addon default-storageclass=true in "multinode-849000"
	I0709 11:19:59.319148   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:59.320086   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.326790   11080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:59.329802   11080 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:19:59.329802   11080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 11:19:59.329802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.380372   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.380372   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.380485   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.380485   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.383785   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:19:59.384697   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Audit-Id: 2d911086-1ff9-4073-8947-dda5637edc43
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.385157   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.876671   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.876962   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.876962   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.876962   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.882163   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:59.882430   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Audit-Id: ad80d923-4aa0-4499-baf3-ad4ec184183d
	I0709 11:19:59.882575   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.883719   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.884541   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:00.380571   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.380571   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.380571   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.380571   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.383966   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:00.384064   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Audit-Id: 4a57b8ec-36c2-4d90-9953-8040b268ad72
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.384193   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.384193   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.384227   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.384339   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:00.874487   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.874487   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.874577   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.874577   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.878085   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:00.878446   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Audit-Id: 7a79b48d-490c-45b9-8151-9d41d845548a
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.878824   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.384736   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.384736   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.384736   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.384736   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.389692   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:01.389768   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.389768   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.389768   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.389862   11080 round_trippers.go:580]     Audit-Id: 1717079c-a1a4-4056-ab5c-ebb223423669
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.389950   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.391360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.648493   11080 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:01.648493   11080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:20:01.693665   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.693737   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.693813   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:01.876763   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.876763   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.876763   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.876763   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.879377   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:01.879377   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Audit-Id: 0ed34bf6-0054-408f-9605-05f03b8f80e6
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.880494   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.384156   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.384242   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.384242   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.384242   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.387596   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:02.388425   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.388519   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.388569   11080 round_trippers.go:580]     Audit-Id: 259b4cd6-103a-46f6-84e4-4843fc15af0a
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.389015   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.389720   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:02.877416   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.877512   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.877583   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.877583   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.880264   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:02.880264   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Audit-Id: 5562798d-5a0c-40f4-971f-b148e1abc842
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.881513   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.385289   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.385402   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.385505   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.385568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.388996   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.389181   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Audit-Id: 4ecfd387-5cb9-439c-becc-8c20cdb41af7
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.389360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.879716   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.879972   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.879972   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.879972   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.883598   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.883598   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Audit-Id: ec1efeda-bf31-45f7-a76f-11d053440253
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.884488   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.951175   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:03.951212   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:03.951320   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:04.384770   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.384770   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.384770   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.384770   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.390877   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:04.390877   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Audit-Id: 2dfefc86-a830-4942-9bba-6769c2bc2c15
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.391263   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:04.391723   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
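The `has status "Ready":"False"` verdicts above come from reading the Ready entry in `status.conditions` of each Node response. A stdlib-only Go sketch of that check, decoding just the fields needed; the struct and function names here are illustrative, not minikube's:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal view of a Node object: only the condition list matters for the
// readiness check. Field tags follow the Kubernetes API shape.
type nodeCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

type node struct {
	Status struct {
		Conditions []nodeCondition `json:"conditions"`
	} `json:"status"`
}

// nodeReady reports whether the Ready condition is "True" in a Node JSON
// document such as the (truncated) response bodies logged above.
func nodeReady(body []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ready, err := nodeReady(sample)
	fmt.Println(ready, err) // false <nil>
}
```

Unknown JSON fields are ignored by encoding/json, so the sketch tolerates the full Node payload even though it declares only the condition fields.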
	I0709 11:20:04.417029   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:04.417846   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:04.417999   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:04.559903   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:20:04.876248   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.876326   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.876326   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.876326   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.879898   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:04.879898   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Audit-Id: 1a6b0670-7193-473e-b8b3-1e5ed801e6c2
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.880302   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.131215   11080 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0709 11:20:05.131215   11080 command_runner.go:130] > pod/storage-provisioner created
	I0709 11:20:05.382732   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.382846   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.382846   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.382940   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.385465   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:05.385465   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Audit-Id: a9b472dd-22b2-460d-9517-6e634e4a101a
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.386469   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.875363   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.875363   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.875363   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.875363   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.879073   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:05.879530   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Audit-Id: 27ad306f-2225-40f7-8dc1-fa87ab3246f1
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.879530   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.879646   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.879646   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.880110   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.381697   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.381697   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.381697   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.381697   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.385207   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.385655   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Audit-Id: 696fd9a0-d92d-43a9-8bb1-bfc5d15a688d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.385720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:06.619934   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:06.761070   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:06.873491   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.873559   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.873559   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.873615   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.876478   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.876544   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Audit-Id: efcee314-8dd6-4c48-a1a6-4bf059942d04
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.876612   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.876721   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.877563   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:06.908144   11080 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0709 11:20:06.908847   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses
	I0709 11:20:06.908910   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.908910   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.908910   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.912483   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.912686   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Length: 1273
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Audit-Id: 739ee856-002a-4545-9544-df6be0efec2a
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.912921   11080 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0709 11:20:06.913516   11080 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.913596   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0709 11:20:06.913596   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:20:06.913704   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.916586   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.916586   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Audit-Id: a5ae0cbf-9df0-489a-8da4-2e8f3aa910ad
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Length: 1220
	I0709 11:20:06.917609   11080 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.921571   11080 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0709 11:20:06.923563   11080 addons.go:510] duration metric: took 9.9600694s for enable addons: enabled=[storage-provisioner default-storageclass]
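Editorial note (not part of the log): the addon step above applies `storageclass.yaml` over SSH, reads the resulting StorageClass back, and PUTs it with the `storageclass.kubernetes.io/is-default-class` annotation set to `"true"`, as visible in the request/response bodies. A minimal sketch of that default-class check against JSON bodies like the ones logged; `isDefaultStorageClass` is a hypothetical helper name, not minikube's actual function:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// isDefaultStorageClass reports whether a StorageClass JSON object carries the
// well-known annotation marking it as the cluster default. Hypothetical helper;
// it mirrors the annotation visible in the PUT response in the log above.
func isDefaultStorageClass(body []byte) (bool, error) {
	var sc struct {
		Metadata struct {
			Annotations map[string]string `json:"annotations"`
		} `json:"metadata"`
	}
	if err := json.Unmarshal(body, &sc); err != nil {
		return false, err
	}
	return sc.Metadata.Annotations["storageclass.kubernetes.io/is-default-class"] == "true", nil
}

func main() {
	body := []byte(`{"kind":"StorageClass","metadata":{"name":"standard","annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`)
	ok, err := isDefaultStorageClass(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(ok) // prints "true"
}
```

The annotation key is the standard Kubernetes marker for a default StorageClass; only the helper's name and shape are invented here.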
	I0709 11:20:07.375568   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.375568   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.375568   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.375568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.378569   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:07.379620   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Audit-Id: bd77f714-dc63-4d2c-bf78-52162a6b64d7
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.380117   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:07.875799   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.875861   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.875861   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.875861   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.880450   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:07.880704   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Audit-Id: 74d6bf60-f1ad-4db1-861f-6ea7ba47b092
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.881227   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:08.380911   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.381007   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.381007   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.381059   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.384650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.384650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Audit-Id: 46699637-e1f2-4ffe-9a5a-606601b7ce76
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.385170   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.385430   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.385689   11080 node_ready.go:49] node "multinode-849000" has status "Ready":"True"
	I0709 11:20:08.385689   11080 node_ready.go:38] duration metric: took 10.5152391s for node "multinode-849000" to be "Ready" ...
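Editorial note (not part of the log): the `node_ready.go` poll above repeats the GET on the Node until its `status.conditions` list reports a condition of type `Ready` with status `True` (the conditions themselves are truncated out of the logged response bodies). A minimal sketch of that check; `nodeIsReady` is a hypothetical name, not minikube's actual function:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeIsReady reports whether a Node JSON object has a status condition of
// type "Ready" with status "True" -- the check behind the
// node_ready.go "Ready":"False"/"Ready":"True" lines in the log above.
func nodeIsReady(body []byte) (bool, error) {
	var node struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(body, &node); err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition yet: treat the node as not ready.
	return false, nil
}

func main() {
	body := []byte(`{"kind":"Node","status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ready, err := nodeIsReady(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(ready) // prints "true"
}
```

In the log, the poll runs on a roughly 500ms interval and flips once the Node's `resourceVersion` advances from 340 to 428.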
	I0709 11:20:08.385689   11080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:08.385689   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:08.385689   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.385689   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.385689   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.389650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.389650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Audit-Id: c7a373c1-e4d1-49a7-b63d-f1f5fe5cbdfe
	I0709 11:20:08.391677   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0709 11:20:08.396680   11080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:08.396680   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.396680   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.396680   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.397654   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.401662   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:08.401662   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Audit-Id: f0c73321-6fb5-4d40-a2ca-139f50a7329a
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.402451   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.403030   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.403030   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.403030   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.403030   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.409674   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:08.409674   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.409674   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Audit-Id: f9f6bf0c-50a8-416b-b487-7a0381a93ada
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.411023   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.904464   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.904538   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.904538   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.904538   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.924115   11080 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0709 11:20:08.924115   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.924115   11080 round_trippers.go:580]     Audit-Id: 5c7a83f8-f6fb-40c3-af41-44c2d80fb1eb
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.924509   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.925643   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.925643   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.925643   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.925643   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.942620   11080 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0709 11:20:08.943087   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Audit-Id: 1a00f334-2356-4158-b461-0e0c6821e0b6
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.945720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.412235   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.412389   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.412389   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.412389   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.417018   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.417018   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Audit-Id: 1bacafec-faf2-4175-9ce5-e5206b1140e1
	I0709 11:20:09.417950   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:09.418720   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.418777   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.418777   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.418777   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.421159   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.421159   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Audit-Id: 2bf8156c-3153-4e3e-b8c5-b1b8a2e4e26e
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.423016   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.901337   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.901337   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.901337   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.901337   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.953926   11080 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0709 11:20:09.953926   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Audit-Id: 1aada5b5-53a1-4882-b982-815daf34a5c5
	I0709 11:20:09.955836   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0709 11:20:09.956635   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.956732   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.956732   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.956732   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.959094   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.959094   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Audit-Id: ae59e9a3-f8ac-437b-9c75-8931309c73ad
	I0709 11:20:09.960120   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.960120   11080 pod_ready.go:92] pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.960661   11080 pod_ready.go:81] duration metric: took 1.5639759s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-849000
	I0709 11:20:09.960661   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.960828   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.960828   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.969075   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.969075   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Audit-Id: a17b78fa-415e-466e-8ae8-a1653319ab27
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.969743   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-849000","namespace":"kube-system","uid":"d9414b5f-b783-46b5-bd41-e07fbd338491","resourceVersion":"303","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.206.134:2379","kubernetes.io/config.hash":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.mirror":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.seen":"2024-07-09T18:19:42.812164051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0709 11:20:09.969743   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.970269   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.970321   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.970321   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.979269   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.979269   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Audit-Id: cfddc806-0d43-46bb-bd86-3712a4ab9215
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.979994   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.980431   11080 pod_ready.go:92] pod "etcd-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.980497   11080 pod_ready.go:81] duration metric: took 19.7697ms for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980497   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980690   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-849000
	I0709 11:20:09.980722   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.980753   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.980753   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.984639   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:09.984639   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Audit-Id: 4f8bf9fa-3246-46ce-b3d4-8ea91623128e
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.985248   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-849000","namespace":"kube-system","uid":"185dfcae-7f97-43a4-8ad7-9c2e18ad93f4","resourceVersion":"300","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.206.134:8443","kubernetes.io/config.hash":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.mirror":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0709 11:20:09.986253   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.986253   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.986320   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.986320   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.990658   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.990658   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Audit-Id: fc9d97ed-a036-474e-af5f-aba9fc7ea966
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.991081   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.991515   11080 pod_ready.go:92] pod "kube-apiserver-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.991547   11080 pod_ready.go:81] duration metric: took 11.0006ms for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991547   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991623   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-849000
	I0709 11:20:09.991803   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.991803   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.991803   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.002697   11080 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 11:20:10.002697   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.002697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.002697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Audit-Id: 5618d530-048d-4e22-a41f-dbc85f57723c
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.003187   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.003187   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.003445   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-849000","namespace":"kube-system","uid":"84786301-1bd7-4d77-900b-1130c36259bc","resourceVersion":"316","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.mirror":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165951Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0709 11:20:10.004195   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.004275   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.004275   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.004275   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.011235   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:10.011235   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Audit-Id: b83b8a86-c88b-4eda-adbc-8a4b41174f1d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.011896   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.012314   11080 pod_ready.go:92] pod "kube-controller-manager-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.012440   11080 pod_ready.go:81] duration metric: took 20.8924ms for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012440   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012550   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qv64t
	I0709 11:20:10.012621   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.012662   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.012662   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.016102   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.016102   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Audit-Id: 9328b861-5000-4723-bef4-66bdf082cdc5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.016102   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qv64t","generateName":"kube-proxy-","namespace":"kube-system","uid":"64fd2bca-c117-405b-98c4-db980781839b","resourceVersion":"407","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"20beb658-ecf0-4085-ad20-237b0700e9f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20beb658-ecf0-4085-ad20-237b0700e9f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0709 11:20:10.017415   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.017554   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.017554   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.017554   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.021755   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.021755   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Audit-Id: 7b57217c-1b40-42ea-bd05-ba32c6c09379
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.022911   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.023043   11080 pod_ready.go:92] pod "kube-proxy-qv64t" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.023043   11080 pod_ready.go:81] duration metric: took 10.6037ms for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.023043   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.182509   11080 request.go:629] Waited for 159.4656ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182778   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182865   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.182865   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.182897   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.186242   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.186242   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Audit-Id: 821c7888-15a2-4ad9-a6ba-adc53ab8a4f6
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.186554   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.186784   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-849000","namespace":"kube-system","uid":"03dff506-a8f6-41bd-baac-1ef9ad6892e3","resourceVersion":"323","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.mirror":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.seen":"2024-07-09T18:19:42.812159751Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0709 11:20:10.385659   11080 request.go:629] Waited for 198.2784ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.385659   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.385659   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.389558   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.389771   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Audit-Id: 9cc904cb-e823-4a93-85c2-226f98e81fdf
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.390169   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.390760   11080 pod_ready.go:92] pod "kube-scheduler-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.390865   11080 pod_ready.go:81] duration metric: took 367.8204ms for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.390865   11080 pod_ready.go:38] duration metric: took 2.0051694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:10.390944   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0709 11:20:10.403679   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 11:20:10.435279   11080 command_runner.go:130] > 2115
	I0709 11:20:10.436278   11080 api_server.go:72] duration metric: took 13.4725942s to wait for apiserver process to appear ...
	I0709 11:20:10.436474   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0709 11:20:10.436474   11080 api_server.go:253] Checking apiserver healthz at https://172.18.206.134:8443/healthz ...
	I0709 11:20:10.445806   11080 api_server.go:279] https://172.18.206.134:8443/healthz returned 200:
	ok
	I0709 11:20:10.445926   11080 round_trippers.go:463] GET https://172.18.206.134:8443/version
	I0709 11:20:10.445926   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.445926   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.445926   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.448281   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:10.448281   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Audit-Id: 7be21a54-db6a-4318-a5ec-f0cce4ef44ab
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.448527   11080 round_trippers.go:580]     Content-Length: 263
	I0709 11:20:10.448527   11080 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0709 11:20:10.448527   11080 api_server.go:141] control plane version: v1.30.2
	I0709 11:20:10.448527   11080 api_server.go:131] duration metric: took 12.0534ms to wait for apiserver health ...
	I0709 11:20:10.448527   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 11:20:10.589225   11080 request.go:629] Waited for 140.697ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.589493   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.589493   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.594092   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.594092   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Audit-Id: 2b8208e7-66c3-407d-a513-81f6367a1a50
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.594092   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.594453   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.594453   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.596104   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.598949   11080 system_pods.go:59] 8 kube-system pods found
	I0709 11:20:10.599087   11080 system_pods.go:61] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.599087   11080 system_pods.go:74] duration metric: took 150.5589ms to wait for pod list to return data ...
	I0709 11:20:10.599087   11080 default_sa.go:34] waiting for default service account to be created ...
	I0709 11:20:10.792113   11080 request.go:629] Waited for 192.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792224   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792412   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.792412   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.792412   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.796230   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.796230   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.796230   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Content-Length: 261
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Audit-Id: bc150d93-fb7c-4582-beac-a89c1e26ce41
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.796858   11080 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1dc179c9-669f-4ab7-8a39-5d6fc6670d2d","resourceVersion":"341","creationTimestamp":"2024-07-09T18:19:56Z"}}]}
	I0709 11:20:10.797248   11080 default_sa.go:45] found service account: "default"
	I0709 11:20:10.797329   11080 default_sa.go:55] duration metric: took 198.009ms for default service account to be created ...
	I0709 11:20:10.797329   11080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 11:20:10.981424   11080 request.go:629] Waited for 183.8495ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981505   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981752   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.981752   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.981752   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.987139   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:10.987139   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.987139   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Audit-Id: dc7e70c7-c26f-47bd-af7e-e59f9f0e6a12
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.987854   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.990198   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.994984   11080 system_pods.go:86] 8 kube-system pods found
	I0709 11:20:10.994984   11080 system_pods.go:89] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.995749   11080 system_pods.go:126] duration metric: took 198.4185ms to wait for k8s-apps to be running ...
	I0709 11:20:10.995749   11080 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 11:20:11.006411   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:20:11.032299   11080 system_svc.go:56] duration metric: took 36.2519ms WaitForService to wait for kubelet
	I0709 11:20:11.032384   11080 kubeadm.go:576] duration metric: took 14.0686983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:20:11.032449   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0709 11:20:11.185036   11080 request.go:629] Waited for 152.3704ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:11.185036   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:11.185036   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:11.188676   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:11.188676   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:11 GMT
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Audit-Id: de445958-d4f3-421b-bce6-7208e043ef68
	I0709 11:20:11.189854   11080 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0709 11:20:11.190610   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 11:20:11.190610   11080 node_conditions.go:123] node cpu capacity is 2
	I0709 11:20:11.190610   11080 node_conditions.go:105] duration metric: took 158.1605ms to run NodePressure ...
	I0709 11:20:11.190610   11080 start.go:240] waiting for startup goroutines ...
	I0709 11:20:11.190610   11080 start.go:245] waiting for cluster config update ...
	I0709 11:20:11.190610   11080 start.go:254] writing updated cluster config ...
	I0709 11:20:11.194395   11080 out.go:177] 
	I0709 11:20:11.197726   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.210868   11080 out.go:177] * Starting "multinode-849000-m02" worker node in "multinode-849000" cluster
	I0709 11:20:11.213536   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:20:11.214479   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:20:11.214815   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:20:11.215058   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:20:11.215282   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.219596   11080 start.go:360] acquireMachinesLock for multinode-849000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:20:11.219782   11080 start.go:364] duration metric: took 159µs to acquireMachinesLock for "multinode-849000-m02"
	I0709 11:20:11.219811   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0709 11:20:11.219811   11080 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0709 11:20:11.223353   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:20:11.223353   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:20:11.223353   11080 client.go:168] LocalClient.Create starting
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224657   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:20:13.151358   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:20:13.151782   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:13.151847   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:20:14.883405   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:20:14.883642   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:14.883703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:20.080459   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:20:20.573750   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: Creating VM...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:23.656383   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:23.657490   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:23.657490   11080 main.go:141] libmachine: Using switch "Default Switch"
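The two `Get-VMSwitch` queries above filter to external switches or the well-known "Default Switch" GUID and sort by `SwitchType` before the driver picks one. The selection logic can be sketched as follows; this is an illustrative reconstruction, not the driver's actual Go code, and the assumption that the first entry of the sorted list wins is inferred from the log:

```python
import json

# SwitchType values as reported by Hyper-V: 0 = Private, 1 = Internal, 2 = External.
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"  # the "Default Switch" GUID seen in the log

def pick_switch(stdout: str) -> str:
    """Pick a VM switch from the JSON emitted by the PowerShell query above."""
    switches = json.loads(stdout)
    # Mirror the Where-Object / Sort-Object clauses: keep external switches or
    # the Default Switch, order by SwitchType, and (assumed) take the first hit.
    candidates = sorted(
        (s for s in switches
         if s["SwitchType"] == 2 or s["Id"] == DEFAULT_SWITCH_ID),
        key=lambda s: s["SwitchType"],
    )
    return candidates[0]["Name"]

stdout = ('[{"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",'
          ' "Name": "Default Switch", "SwitchType": 1}]')
print(pick_switch(stdout))  # Default Switch
```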
	I0709 11:20:23.657579   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:25.447625   11080 main.go:141] libmachine: Creating VHD
	I0709 11:20:25.447625   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5E53C6D0-5109-4D35-B1EC-1393270CA44B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stderr =====>] : 
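Note the `New-VHD` output above: `FileSize` (10486272) is exactly 512 bytes larger than `Size` (10485760), because a fixed VHD image carries a trailing 512-byte footer after the raw data. A one-liner check of that relationship:

```python
VHD_FOOTER_BYTES = 512  # fixed-VHD images end with a 512-byte footer block

def fixed_vhd_file_size(virtual_size: int) -> int:
    """On-disk size of a fixed VHD for a given virtual disk size."""
    return virtual_size + VHD_FOOTER_BYTES

print(fixed_vhd_file_size(10 * 1024 * 1024))  # 10486272, matching FileSize above
```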
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing SSH key tar header
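The "magic tar header" / "SSH key tar header" steps write a tar archive directly into the start of the raw fixed VHD, so the boot2docker guest can extract the SSH key on first boot. A hypothetical sketch of that trick (the in-archive path `.ssh/authorized_keys` is an assumption for illustration, not minikube's exact layout):

```python
import io
import tarfile

def write_key_tar(disk_path: str, pubkey: bytes) -> None:
    """Embed a tar archive holding an SSH key at offset 0 of a raw disk image."""
    # Build the archive in memory first.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")  # assumed path
        info.size = len(pubkey)
        tar.addfile(info, io.BytesIO(pubkey))
    # Overwrite the beginning of the image in place; the rest of the
    # 10MB fixed VHD payload is left untouched.
    with open(disk_path, "r+b") as disk:
        disk.write(buf.getvalue())
```

The tar end-of-archive marker is two zero blocks, so the zero-filled remainder of the image reads as a clean end of archive.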
	I0709 11:20:29.284763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:20:32.544147   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:32.544825   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:32.544942   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -SizeBytes 20000MB
	I0709 11:20:35.179825   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-849000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000-m02 -DynamicMemoryEnabled $false
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000-m02 -Count 2
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:43.474205   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\boot2docker.iso'
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:46.097188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd'
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: Starting VM...
	I0709 11:20:49.141353   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000-m02
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:52.444588   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:20:52.444802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:54.848352   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:57.488165   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:57.488298   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:58.493459   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:00.761195   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:03.353161   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:03.353743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:04.368700   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:06.644937   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:10.193913   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:16.096106   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:18.442305   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stderr =====>] : 
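The "Waiting for host to start..." phase above alternates a `.state` query with an `ipaddresses[0]` query, sleeping about a second between empty results until the guest reports `172.18.205.211`. That retry loop can be sketched as (the timeout value is an assumption; the log does not state one):

```python
import time

def wait_for_ip(get_ip, timeout=120.0, interval=1.0, sleep=time.sleep):
    """Poll an IP probe until it returns a non-empty address.

    `get_ip` stands in for running
    (( Hyper-V\\Get-VM <name> ).networkadapters[0]).ipaddresses[0];
    it returns "" while the guest has no DHCP lease yet.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = get_ip().strip()
        if ip:
            return ip
        sleep(interval)  # matches the roughly 1s gap between attempts in the log
    raise TimeoutError("VM never reported an IP address")
```

With a fake probe, `wait_for_ip(lambda: next(iter_of_results), sleep=lambda s: None)` returns as soon as a non-empty address appears.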
	I0709 11:21:23.279312   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:21:23.279415   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:25.559526   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:25.560574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:25.560679   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:28.232227   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:28.233232   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:28.238921   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:28.250822   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:28.250822   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:21:28.388458   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:21:28.388571   11080 buildroot.go:166] provisioning hostname "multinode-849000-m02"
	I0709 11:21:28.388571   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:30.618011   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:33.212355   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:33.212671   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:33.219551   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:33.220082   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:33.220082   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000-m02 && echo "multinode-849000-m02" | sudo tee /etc/hostname
	I0709 11:21:33.391210   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000-m02
	
	I0709 11:21:33.391343   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:35.578543   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:38.191886   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:38.192615   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:38.192615   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
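The shell snippet above is idempotent: it leaves `/etc/hosts` alone if any line already ends in the hostname, rewrites an existing `127.0.1.1` entry if present, and appends one otherwise. The same logic, re-expressed in Python for clarity:

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the /etc/hosts update above on a file's contents."""
    # grep -xq '.*\s<name>': some line already ends with the hostname.
    if re.search(rf"^.*\s{re.escape(name)}$", hosts, re.MULTILINE):
        return hosts
    # grep -xq '127.0.1.1\s.*': rewrite the existing loopback alias line.
    if re.search(r"^127\.0\.1\.1\s", hosts, re.MULTILINE):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts, flags=re.MULTILINE)
    # Otherwise append a fresh entry, as the tee -a branch does.
    return hosts.rstrip("\n") + f"\n127.0.1.1 {name}\n"
```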
	I0709 11:21:38.341565   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:21:38.341639   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:21:38.341639   11080 buildroot.go:174] setting up certificates
	I0709 11:21:38.341639   11080 provision.go:84] configureAuth start
	I0709 11:21:38.341639   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:43.076717   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:45.280910   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:45.281082   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:45.281156   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:47.878898   11080 provision.go:143] copyHostCerts
	I0709 11:21:47.879605   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:21:47.880180   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:21:47.880180   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:21:47.880971   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:21:47.882540   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:21:47.883125   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:21:47.883125   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:21:47.883679   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:21:47.885058   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:21:47.885436   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:21:47.885557   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:21:47.886134   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:21:47.887498   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000-m02 san=[127.0.0.1 172.18.205.211 localhost minikube multinode-849000-m02]
	I0709 11:21:48.001674   11080 provision.go:177] copyRemoteCerts
	I0709 11:21:48.013068   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:21:48.014084   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:50.250018   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:50.250215   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:50.250314   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:52.836979   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:52.837914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:52.838808   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:21:52.940691   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9274594s)
	I0709 11:21:52.940691   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:21:52.941438   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:21:52.990054   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:21:52.990054   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:21:53.038708   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:21:53.039254   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0709 11:21:53.086100   11080 provision.go:87] duration metric: took 14.7444116s to configureAuth
	I0709 11:21:53.086158   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:21:53.086860   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:21:53.086990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:55.350257   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:55.351179   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:55.351218   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:57.996542   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:57.997434   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:57.997434   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:21:58.134576   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:21:58.134576   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:21:58.135124   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:21:58.135124   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:00.283090   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:00.284070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:00.284213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:02.866133   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:02.866377   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:02.871379   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:02.872132   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:02.872132   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.206.134"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:22:03.038743   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.206.134
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
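Two details of the unit file echoed above are worth calling out: the empty `ExecStart=` clears the command inherited from the base dockerd unit (otherwise systemd refuses a second `ExecStart=` for non-oneshot services, as the file's own comment explains), and the `NO_PROXY` environment carries the first node's IP. A hypothetical renderer, reduced to the fields visible in the log:

```python
# Minimal stand-in template for the drop-in above; most directives are elided.
UNIT_TEMPLATE = """[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY={no_proxy}"
# An empty ExecStart= clears the command inherited from the base unit;
# without it systemd rejects a second ExecStart= for Type=notify services.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={provider}

[Install]
WantedBy=multi-user.target
"""

def render_unit(no_proxy: str, provider: str) -> str:
    """Fill in the per-cluster values seen in the log."""
    return UNIT_TEMPLATE.format(no_proxy=no_proxy, provider=provider)
```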
	I0709 11:22:03.038743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:05.225105   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:07.815935   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:07.816766   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:07.816766   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:22:10.033737   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:22:10.033805   11080 machine.go:97] duration metric: took 46.7543344s to provisionDockerMachine
	I0709 11:22:10.033805   11080 client.go:171] duration metric: took 1m58.8100611s to LocalClient.Create
	I0709 11:22:10.033904   11080 start.go:167] duration metric: took 1m58.81016s to libmachine.API.Create "multinode-849000"
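The `diff -u ... || { mv ...; systemctl daemon-reload ...; }` command a few lines up installs the new unit and restarts docker only when the file differs (here the diff fails with "can't stat" because the unit did not exist yet, so the install branch runs). The update-if-changed pattern, sketched in Python:

```python
import os

def install_if_changed(new_content: str, path: str) -> bool:
    """Install a config file only when it differs from what is on disk.

    Returns True when the caller should reload and restart the service,
    mirroring the || { mv; daemon-reload; restart; } branch in the log.
    """
    try:
        with open(path) as f:
            if f.read() == new_content:
                return False  # diff clean: nothing to do
    except FileNotFoundError:
        pass  # same as diff's "can't stat" case above: treat as changed
    with open(path + ".new", "w") as f:
        f.write(new_content)
    os.replace(path + ".new", path)  # atomic equivalent of the mv
    return True
```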
	I0709 11:22:10.033904   11080 start.go:293] postStartSetup for "multinode-849000-m02" (driver="hyperv")
	I0709 11:22:10.033904   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:22:10.049483   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:22:10.049483   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:12.196759   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:14.773966   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:14.774211   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:14.774388   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:14.880469   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8308404s)
	I0709 11:22:14.893820   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:22:14.900205   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:22:14.900586   11080 command_runner.go:130] > ID=buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:22:14.900586   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:22:14.900878   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:22:14.900958   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:22:14.901694   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:22:14.902949   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:22:14.903007   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:22:14.914648   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:22:14.931988   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:22:14.976672   11080 start.go:296] duration metric: took 4.9427507s for postStartSetup
	I0709 11:22:14.980296   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:17.149588   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:19.731744   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:22:19.734373   11080 start.go:128] duration metric: took 2m8.5141378s to createHost
	I0709 11:22:19.734498   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:21.884569   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:21.885475   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:21.885570   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:24.462310   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:24.462866   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:24.462866   11080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0709 11:22:24.602515   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549344.609926885
	
	I0709 11:22:24.602629   11080 fix.go:216] guest clock: 1720549344.609926885
	I0709 11:22:24.602629   11080 fix.go:229] Guest: 2024-07-09 11:22:24.609926885 -0700 PDT Remote: 2024-07-09 11:22:19.7344985 -0700 PDT m=+344.108245701 (delta=4.875428385s)
	I0709 11:22:24.602743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:26.788501   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:29.322797   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:29.323325   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:29.323492   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549344
	I0709 11:22:29.467864   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:22:24 UTC 2024
	
	I0709 11:22:29.467922   11080 fix.go:236] clock set: Tue Jul  9 18:22:24 UTC 2024
	 (err=<nil>)
	I0709 11:22:29.467976   11080 start.go:83] releasing machines lock for "multinode-849000-m02", held for 2m18.2477075s
	I0709 11:22:29.468213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:31.622432   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:31.623654   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:31.623715   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:34.183731   11080 out.go:177] * Found network options:
	I0709 11:22:34.186860   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.188920   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.191174   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.194227   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 11:22:34.195301   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.198398   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:22:34.198526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:34.208413   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:22:34.209355   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474885   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:39.120904   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.121123   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.121331   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.150109   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.214930   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0709 11:22:39.216101   11080 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0076706s)
	W0709 11:22:39.216101   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:22:39.228355   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:22:39.361349   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:22:39.361418   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:22:39.361418   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1630028s)
	I0709 11:22:39.361567   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:22:39.361605   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:39.361773   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:39.395534   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:22:39.411076   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:22:39.440578   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:22:39.459507   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:22:39.472271   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:22:39.503478   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.535129   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:22:39.565594   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.596645   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:22:39.626303   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:22:39.657871   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:22:39.687857   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:22:39.718726   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:22:39.737354   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:22:39.750092   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:22:39.780554   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:39.961136   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 11:22:40.003477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:40.015211   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:22:40.037706   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:22:40.037931   11080 command_runner.go:130] > [Unit]
	I0709 11:22:40.037931   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:22:40.037931   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:22:40.037931   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:22:40.037931   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:22:40.037996   11080 command_runner.go:130] > [Service]
	I0709 11:22:40.037996   11080 command_runner.go:130] > Type=notify
	I0709 11:22:40.037996   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:22:40.037996   11080 command_runner.go:130] > Environment=NO_PROXY=172.18.206.134
	I0709 11:22:40.037996   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:22:40.037996   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:22:40.038089   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:22:40.038089   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:22:40.038089   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:22:40.038089   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:22:40.038089   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:22:40.038158   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:22:40.038158   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:22:40.038158   11080 command_runner.go:130] > ExecStart=
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:22:40.038260   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:22:40.038260   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:22:40.038260   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:22:40.038323   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:22:40.038430   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:22:40.038469   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:22:40.038532   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:22:40.038566   11080 command_runner.go:130] > Delegate=yes
	I0709 11:22:40.038566   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:22:40.038566   11080 command_runner.go:130] > KillMode=process
	I0709 11:22:40.038566   11080 command_runner.go:130] > [Install]
	I0709 11:22:40.038609   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:22:40.055979   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.091794   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:22:40.154011   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.190664   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.226820   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:22:40.287595   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.308575   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:40.342070   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:22:40.354449   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:22:40.359803   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:22:40.371212   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:22:40.388323   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:22:40.433437   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:22:40.633922   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:22:40.820826   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:22:40.820826   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:22:40.864181   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:41.057366   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:23:42.172852   11080 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0709 11:23:42.172852   11080 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0709 11:23:42.173160   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1155866s)
	I0709 11:23:42.185419   11080 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.209973   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.210951   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211574   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211639   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0709 11:23:42.221589   11080 out.go:177] 
	W0709 11:23:42.223827   11080 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0709 11:23:42.223827   11080 out.go:239] * 
	W0709 11:23:42.225718   11080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0709 11:23:42.228228   11080 out.go:177] 
	
	
	==> Docker <==
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597835991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597891091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597905791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597983991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597776491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d8c6b21616c767448c4be98bae932ed2b404a3dadcf2b2b4b157e8bcf347ea/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a33ce3348449c0faec48fb58b4574718ba6b78d837824e60579921c71f06d76/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968184436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968452735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968474235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968801835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.141801596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.142933705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.143853812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.144140014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904534514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904809014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904875715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904980715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:18 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/216d18e70c2fb87f116d16247afca62184ce070d4aca7bbce19d833808db917c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 09 18:24:19 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285320124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285707025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285773326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285917526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c7a0fcb9e869e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   216d18e70c2fb       busybox-fc5497c4f-f2j8m
	c150592e658c3       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   2a33ce3348449       coredns-7db6d8ff4d-lzsvc
	37c7b8e14dc9c       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   06d8c6b21616c       storage-provisioner
	f3de6fb5f7f77       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              16 minutes ago      Running             kindnet-cni               0                   668c809456776       kindnet-8ww8c
	02ab9d1727686       53c535741fb44                                                                                         16 minutes ago      Running             kube-proxy                0                   0a60f24294838       kube-proxy-qv64t
	0272c56037c7d       3861cfcd7c04c                                                                                         16 minutes ago      Running             etcd                      0                   2c574be2cc6d3       etcd-multinode-849000
	8661e349d48ab       7820c83aa1394                                                                                         16 minutes ago      Running             kube-scheduler            0                   b9412aa955ab7       kube-scheduler-multinode-849000
	a89ee753e7759       e874818b3caac                                                                                         16 minutes ago      Running             kube-controller-manager   0                   a610e3d24fa06       kube-controller-manager-multinode-849000
	556077ae2825d       56ce0fd9fb532                                                                                         16 minutes ago      Running             kube-apiserver            0                   2035bb8593f0e       kube-apiserver-multinode-849000
	
	
	==> coredns [c150592e658c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = eabdad51eef6fc649fa850c178ba451366b41048c1c621a6be25e706245d9103e597e4866d961c875c300d6a366ff9db50ab3afe55608b789039c53007846ed6
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54651 - 41351 "HINFO IN 6752767091270397564.1917026836058955763. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104932825s
	[INFO] 10.244.0.3:37665 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218301s
	[INFO] 10.244.0.3:33292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.095768808s
	[INFO] 10.244.0.3:51028 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033779908s
	[INFO] 10.244.0.3:52198 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.254317433s
	[INFO] 10.244.0.3:58685 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001442s
	[INFO] 10.244.0.3:50205 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.085049073s
	[INFO] 10.244.0.3:41462 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002117s
	[INFO] 10.244.0.3:46161 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002965s
	[INFO] 10.244.0.3:40010 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.038270523s
	[INFO] 10.244.0.3:50213 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181901s
	[INFO] 10.244.0.3:40333 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208801s
	[INFO] 10.244.0.3:33479 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001618s
	[INFO] 10.244.0.3:44590 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223001s
	[INFO] 10.244.0.3:58378 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001694s
	[INFO] 10.244.0.3:35676 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.0.3:50088 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126901s
	
	
	==> describe nodes <==
	Name:               multinode-849000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-849000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=multinode-849000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 18:19:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-849000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 18:36:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 18:34:59 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 18:34:59 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 18:34:59 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 18:34:59 +0000   Tue, 09 Jul 2024 18:20:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.206.134
	  Hostname:    multinode-849000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 af90c209c8a84d288c2d79663fa33a94
	  System UUID:                69e31ac5-0527-9e4a-81b6-556c6bac7963
	  Boot ID:                    5c1387e9-724e-4a1c-a3cc-dde77e8449e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f2j8m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-lzsvc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-multinode-849000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-8ww8c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-multinode-849000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-multinode-849000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-qv64t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-multinode-849000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node multinode-849000 event: Registered Node multinode-849000 in Controller
	  Normal  NodeReady                16m                kubelet          Node multinode-849000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.061894] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul 9 18:18] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.172355] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[Jul 9 18:19] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.106297] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.542997] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.194600] systemd-fstab-generator[1056]: Ignoring "noauto" option for root device
	[  +0.225984] systemd-fstab-generator[1070]: Ignoring "noauto" option for root device
	[  +2.819794] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.174764] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.183052] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.284648] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[ +10.989764] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.110491] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.025456] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.572905] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.100801] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.070675] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.120083] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.551679] systemd-fstab-generator[2475]: Ignoring "noauto" option for root device
	[  +0.193907] kauditd_printk_skb: 12 callbacks suppressed
	[Jul 9 18:20] kauditd_printk_skb: 51 callbacks suppressed
	[Jul 9 18:24] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [0272c56037c7] <==
	{"level":"info","ts":"2024-07-09T18:19:37.796851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 received MsgPreVoteResp from e42eecf9634a170 at term 1"}
	{"level":"info","ts":"2024-07-09T18:19:37.797062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 became candidate at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.79733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 received MsgVoteResp from e42eecf9634a170 at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.797375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 became leader at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.797444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e42eecf9634a170 elected leader e42eecf9634a170 at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.80456Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e42eecf9634a170","local-member-attributes":"{Name:multinode-849000 ClientURLs:[https://172.18.206.134:2379]}","request-path":"/0/members/e42eecf9634a170/attributes","cluster-id":"88434b99d7bbd165","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-09T18:19:37.804755Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-09T18:19:37.804945Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-09T18:19:37.805302Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.812564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-09T18:19:37.819296Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-09T18:19:37.819456Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-09T18:19:37.820534Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.18.206.134:2379"}
	{"level":"info","ts":"2024-07-09T18:19:37.82294Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"88434b99d7bbd165","local-member-id":"e42eecf9634a170","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.8454Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.845615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:29:37.886741Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":687}
	{"level":"info","ts":"2024-07-09T18:29:37.900514Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":687,"took":"13.301342ms","hash":2108544045,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2121728,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-09T18:29:37.900644Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2108544045,"revision":687,"compact-revision":-1}
	{"level":"info","ts":"2024-07-09T18:34:37.903933Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-07-09T18:34:37.912189Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":927,"took":"7.652225ms","hash":1821337612,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-09T18:34:37.912513Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1821337612,"revision":927,"compact-revision":687}
	{"level":"info","ts":"2024-07-09T18:35:57.287138Z","caller":"traceutil/trace.go:171","msg":"trace[1176997031] linearizableReadLoop","detail":"{readStateIndex:1442; appliedIndex:1441; }","duration":"158.59851ms","start":"2024-07-09T18:35:57.12852Z","end":"2024-07-09T18:35:57.287118Z","steps":["trace[1176997031] 'read index received'  (duration: 137.916144ms)","trace[1176997031] 'applied index is now lower than readState.Index'  (duration: 20.680866ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-09T18:35:57.287544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.000512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-4hjks\" ","response":"range_response_count:1 size:2221"}
	{"level":"info","ts":"2024-07-09T18:35:57.287811Z","caller":"traceutil/trace.go:171","msg":"trace[632773735] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-4hjks; range_end:; response_count:1; response_revision:1233; }","duration":"159.270012ms","start":"2024-07-09T18:35:57.128515Z","end":"2024-07-09T18:35:57.287785Z","steps":["trace[632773735] 'agreement among raft nodes before linearized reading'  (duration: 158.812611ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:36:18 up 18 min,  0 users,  load average: 0.89, 0.71, 0.43
	Linux multinode-849000 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f3de6fb5f7f7] <==
	I0709 18:34:17.267379       1 main.go:227] handling current node
	I0709 18:34:27.280339       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:34:27.280527       1 main.go:227] handling current node
	I0709 18:34:37.294152       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:34:37.294184       1 main.go:227] handling current node
	I0709 18:34:47.304862       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:34:47.305006       1 main.go:227] handling current node
	I0709 18:34:57.309940       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:34:57.310053       1 main.go:227] handling current node
	I0709 18:35:07.323091       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:07.323167       1 main.go:227] handling current node
	I0709 18:35:17.336093       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:17.336185       1 main.go:227] handling current node
	I0709 18:35:27.341401       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:27.341436       1 main.go:227] handling current node
	I0709 18:35:37.356864       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:37.356887       1 main.go:227] handling current node
	I0709 18:35:47.364672       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:47.365207       1 main.go:227] handling current node
	I0709 18:35:57.378884       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:57.379004       1 main.go:227] handling current node
	I0709 18:36:07.387740       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:36:07.387857       1 main.go:227] handling current node
	I0709 18:36:17.401563       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:36:17.401925       1 main.go:227] handling current node
	
	
	==> kube-apiserver [556077ae2825] <==
	I0709 18:19:39.633166       1 cache.go:39] Caches are synced for autoregister controller
	I0709 18:19:39.636794       1 controller.go:615] quota admission added evaluator for: namespaces
	I0709 18:19:39.638553       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0709 18:19:39.698240       1 shared_informer.go:320] Caches are synced for configmaps
	I0709 18:19:39.700011       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0709 18:19:39.702635       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0709 18:19:39.714433       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0709 18:19:40.505081       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0709 18:19:40.517142       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0709 18:19:40.517305       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0709 18:19:41.636583       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0709 18:19:41.706223       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0709 18:19:41.808149       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0709 18:19:41.821195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.206.134]
	I0709 18:19:41.822637       1 controller.go:615] quota admission added evaluator for: endpoints
	I0709 18:19:41.843642       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0709 18:19:42.609385       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0709 18:19:42.805564       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0709 18:19:42.871569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0709 18:19:42.907682       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0709 18:19:57.333598       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0709 18:19:57.543081       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0709 18:35:55.870544       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53940: use of closed network connection
	E0709 18:35:56.795209       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53945: use of closed network connection
	E0709 18:35:57.698486       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53950: use of closed network connection
	
	
	==> kube-controller-manager [a89ee753e775] <==
	I0709 18:19:56.612136       1 shared_informer.go:320] Caches are synced for PV protection
	I0709 18:19:56.613536       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0709 18:19:56.667448       1 shared_informer.go:320] Caches are synced for attach detach
	I0709 18:19:56.718158       1 shared_informer.go:320] Caches are synced for resource quota
	I0709 18:19:56.736984       1 shared_informer.go:320] Caches are synced for resource quota
	I0709 18:19:57.154681       1 shared_informer.go:320] Caches are synced for garbage collector
	I0709 18:19:57.154714       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0709 18:19:57.208598       1 shared_informer.go:320] Caches are synced for garbage collector
	I0709 18:19:57.743180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="172.458844ms"
	I0709 18:19:57.765649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.805292ms"
	I0709 18:19:57.815368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.660854ms"
	I0709 18:19:57.815916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.6µs"
	I0709 18:19:58.007755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.828816ms"
	I0709 18:19:58.026709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.106923ms"
	I0709 18:19:58.029403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.1µs"
	I0709 18:20:07.977654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.049991ms"
	I0709 18:20:08.015594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111µs"
	I0709 18:20:09.991729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.353168ms"
	I0709 18:20:10.001112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="868.106µs"
	I0709 18:20:11.554561       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0709 18:24:17.420348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.233775ms"
	I0709 18:24:17.441694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.911551ms"
	I0709 18:24:17.444364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.629006ms"
	I0709 18:24:20.165672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.094324ms"
	I0709 18:24:20.166173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	
	
	==> kube-proxy [02ab9d172768] <==
	I0709 18:19:58.913720       1 server_linux.go:69] "Using iptables proxy"
	I0709 18:19:58.935439       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.206.134"]
	I0709 18:19:59.002175       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 18:19:59.002345       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 18:19:59.002422       1 server_linux.go:165] "Using iptables Proxier"
	I0709 18:19:59.006984       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 18:19:59.008394       1 server.go:872] "Version info" version="v1.30.2"
	I0709 18:19:59.008567       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 18:19:59.012208       1 config.go:192] "Starting service config controller"
	I0709 18:19:59.012230       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 18:19:59.012257       1 config.go:101] "Starting endpoint slice config controller"
	I0709 18:19:59.012263       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 18:19:59.014777       1 config.go:319] "Starting node config controller"
	I0709 18:19:59.015900       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 18:19:59.113145       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0709 18:19:59.113150       1 shared_informer.go:320] Caches are synced for service config
	I0709 18:19:59.116402       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8661e349d48a] <==
	W0709 18:19:40.760717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0709 18:19:40.760830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0709 18:19:40.849864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0709 18:19:40.850245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0709 18:19:40.865437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.865496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.872200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0709 18:19:40.872364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0709 18:19:40.917325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.917365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.931008       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.931093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.976206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0709 18:19:40.976434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0709 18:19:41.005485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0709 18:19:41.005666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0709 18:19:41.019785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0709 18:19:41.020146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0709 18:19:41.110495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 18:19:41.110614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0709 18:19:41.120707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0709 18:19:41.122629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0709 18:19:41.253897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0709 18:19:41.254338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0709 18:19:43.553553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 09 18:31:42 multinode-849000 kubelet[2293]: E0709 18:31:42.972227    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:31:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:31:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:31:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:31:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:32:42 multinode-849000 kubelet[2293]: E0709 18:32:42.973133    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:32:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:32:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:32:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:32:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:33:42 multinode-849000 kubelet[2293]: E0709 18:33:42.972677    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:33:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:33:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:33:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:33:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:34:42 multinode-849000 kubelet[2293]: E0709 18:34:42.972640    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:34:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:34:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:34:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:34:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:35:42 multinode-849000 kubelet[2293]: E0709 18:35:42.970822    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:35:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:35:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:35:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:35:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [37c7b8e14dc9] <==
	I0709 18:20:09.057077       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0709 18:20:09.079655       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0709 18:20:09.079903       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0709 18:20:09.126660       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0709 18:20:09.126961       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-849000_6de5186f-60e7-46e7-ab51-a1dcafaef8f6!
	I0709 18:20:09.135679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ff72458-ea1d-45ee-8401-48a13fcbb227", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-849000_6de5186f-60e7-46e7-ab51-a1dcafaef8f6 became leader
	I0709 18:20:09.242255       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-849000_6de5186f-60e7-46e7-ab51-a1dcafaef8f6!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:36:10.243372    2360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000: (12.1718961s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-849000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-4hjks
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-849000 describe pod busybox-fc5497c4f-4hjks
helpers_test.go:282: (dbg) kubectl --context multinode-849000 describe pod busybox-fc5497c4f-4hjks:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-4hjks
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl8dk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hl8dk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  108s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (735.20s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (45.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-4hjks -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:572: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-4hjks -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (348.1702ms)

                                                
                                                
** stderr ** 
	W0709 11:36:32.542498   13508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error from server (BadRequest): pod busybox-fc5497c4f-4hjks does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:574: Pod busybox-fc5497c4f-4hjks could not resolve 'host.minikube.internal': exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-f2j8m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-f2j8m -- sh -c "ping -c 1 172.18.192.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-849000 -- exec busybox-fc5497c4f-f2j8m -- sh -c "ping -c 1 172.18.192.1": exit status 1 (10.4083712s)

                                                
                                                
-- stdout --
	PING 172.18.192.1 (172.18.192.1): 56 data bytes
	
	--- 172.18.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:36:33.308347    5912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.18.192.1) from pod (busybox-fc5497c4f-f2j8m): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000: (12.0448152s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25: (8.4536955s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p mount-start-1-823500                           | mount-start-1-823500 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:16 PDT | 09 Jul 24 11:16 PDT |
	| start   | -p multinode-849000                               | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:16 PDT |                     |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- apply -f                   | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:24 PDT | 09 Jul 24 11:24 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- rollout                    | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:24 PDT |                     |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT | 09 Jul 24 11:36 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT |                     |
	|         | busybox-fc5497c4f-4hjks                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT | 09 Jul 24 11:36 PDT |
	|         | busybox-fc5497c4f-f2j8m                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000     | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT |                     |
	|         | busybox-fc5497c4f-f2j8m -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.192.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 11:16:35
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 11:16:35.706571   11080 out.go:291] Setting OutFile to fd 1856 ...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.707294   11080 out.go:304] Setting ErrFile to fd 1916...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.730175   11080 out.go:298] Setting JSON to false
	I0709 11:16:35.734088   11080 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7264,"bootTime":1720541731,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 11:16:35.734088   11080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 11:16:35.740900   11080 out.go:177] * [multinode-849000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 11:16:35.746952   11080 notify.go:220] Checking for updates...
	I0709 11:16:35.749517   11080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:16:35.752016   11080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 11:16:35.754074   11080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 11:16:35.757149   11080 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 11:16:35.759785   11080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 11:16:35.763232   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:16:35.763232   11080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 11:16:41.108594   11080 out.go:177] * Using the hyperv driver based on user configuration
	I0709 11:16:41.113436   11080 start.go:297] selected driver: hyperv
	I0709 11:16:41.113436   11080 start.go:901] validating driver "hyperv" against <nil>
	I0709 11:16:41.113436   11080 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 11:16:41.161717   11080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 11:16:41.163562   11080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:16:41.163562   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:16:41.163562   11080 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0709 11:16:41.163562   11080 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0709 11:16:41.163562   11080 start.go:340] cluster config:
	{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:16:41.164325   11080 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 11:16:41.169436   11080 out.go:177] * Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	I0709 11:16:41.171790   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:16:41.171790   11080 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 11:16:41.171790   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:16:41.172900   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:16:41.173204   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:16:41.173497   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:16:41.173834   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json: {Name:mkcd76fd0991636c9ebb3945d5f6230c136234ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:360] acquireMachinesLock for multinode-849000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-849000"
	I0709 11:16:41.175145   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:16:41.175717   11080 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 11:16:41.178833   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:16:41.179697   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:16:41.179858   11080 client.go:168] LocalClient.Create starting
	I0709 11:16:41.180393   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181037   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:16:41.181305   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.181363   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181499   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:43.203345   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:16:44.905448   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:49.977487   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:49.978001   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:49.980413   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:16:50.481409   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: Creating VM...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:53.557877   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:16:53.557877   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:55.342337   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:55.343188   11080 main.go:141] libmachine: Creating VHD
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:16:59.073202   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 250EFD27-3D80-4D94-9BBB-C36AC3EE4AF2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:16:59.073277   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:16:59.081799   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:02.356056   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -SizeBytes 20000MB
	I0709 11:17:04.920871   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:04.921598   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:04.921696   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-849000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000 -DynamicMemoryEnabled $false
	I0709 11:17:10.906954   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000 -Count 2
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:13.117046   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\boot2docker.iso'
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:15.734748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd'
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:18.434648   11080 main.go:141] libmachine: Starting VM...
	I0709 11:17:18.434648   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000
	I0709 11:17:21.548427   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:23.856308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:23.857327   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:23.857477   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:26.424823   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:26.425555   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:27.429457   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:29.669589   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:33.238604   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:35.539152   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:39.150748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:41.412758   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:43.945561   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:43.946556   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:44.948904   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:47.223493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:49.888321   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stderr =====>] : 
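Editor's note: the repeated `Get-VM ... .state` / `ipaddresses[0]` pairs above are a poll loop: libmachine re-queries the VM roughly once a second until the guest reports an address (empty stdout for the first few polls, then `172.18.206.134`). A minimal sketch of that retry pattern, assuming a hypothetical `getIP` stand-in for the PowerShell query:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls getIP until it returns a non-empty address or the attempt
// budget is exhausted. getIP is a hypothetical stand-in for running
// (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0] via PowerShell.
func waitForIP(getIP func() (string, error), attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		ip, err := getIP()
		if err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("host did not report an IP in time")
}

func main() {
	calls := 0
	// Fake prober: empty stdout for the first four polls, then the address,
	// mirroring the sequence in the log above.
	fake := func() (string, error) {
		calls++
		if calls < 5 {
			return "", nil
		}
		return "172.18.206.134", nil
	}
	ip, err := waitForIP(fake, 10, time.Millisecond)
	fmt.Println(ip, err)
}
```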
	I0709 11:17:52.029346   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:17:52.029346   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:54.184452   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:56.739762   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:56.740551   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:56.747332   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:17:56.757962   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:17:56.757962   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:17:56.888454   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:17:56.888454   11080 buildroot.go:166] provisioning hostname "multinode-849000"
	I0709 11:17:56.888632   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:58.996092   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:01.596255   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:01.596966   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:01.596966   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000 && echo "multinode-849000" | sudo tee /etc/hostname
	I0709 11:18:01.744135   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000
	
	I0709 11:18:01.744309   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:03.902843   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:06.504362   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:06.505105   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:06.511047   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:06.511730   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:06.511730   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:18:06.661183   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
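Editor's note: the shell snippet run over SSH above is an idempotent `/etc/hosts` edit: if no line already maps the hostname, it either rewrites an existing `127.0.1.1` entry in place or appends a new one. A pure-string sketch of the same logic (the `ensureHostname` helper is hypothetical, not minikube's code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname returns hosts with name mapped to 127.0.1.1, mirroring the
// grep/sed/tee pipeline: no-op if already mapped, rewrite an existing
// 127.0.1.1 line if present, otherwise append a new entry.
func ensureHostname(hosts, name string) string {
	present := regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`)
	if present.MatchString(hosts) {
		return hosts // hostname already mapped; nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	// Rewrites the 127.0.1.1 line to "127.0.1.1 multinode-849000".
	fmt.Print(ensureHostname(hosts, "multinode-849000"))
}
```

Because the check runs before any edit, re-running the provisioner leaves an already-correct hosts file untouched.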
	I0709 11:18:06.661276   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:18:06.661276   11080 buildroot.go:174] setting up certificates
	I0709 11:18:06.661276   11080 provision.go:84] configureAuth start
	I0709 11:18:06.661404   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:08.870371   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:08.871487   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:08.871619   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:11.480657   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:13.679886   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:13.680032   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:13.680386   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:16.351593   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:16.351812   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:16.351812   11080 provision.go:143] copyHostCerts
	I0709 11:18:16.351812   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:18:16.351812   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:18:16.352341   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:18:16.352562   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:18:16.353746   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:18:16.353870   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:18:16.353870   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:18:16.354397   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:18:16.355454   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:18:16.355782   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:18:16.355782   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:18:16.356143   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:18:16.357550   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000 san=[127.0.0.1 172.18.206.134 localhost minikube multinode-849000]
	I0709 11:18:16.528750   11080 provision.go:177] copyRemoteCerts
	I0709 11:18:16.542866   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:18:16.543526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:18.745596   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:18.746390   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:18.746524   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:21.394478   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:21.394661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:21.394962   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:21.507114   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9635719s)
	I0709 11:18:21.507261   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:18:21.507746   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:18:21.555636   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:18:21.556231   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0709 11:18:21.603561   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:18:21.604047   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:18:21.651880   11080 provision.go:87] duration metric: took 14.9904677s to configureAuth
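The `configureAuth` step above generates a CA-signed server certificate with the SANs listed in the `provision.go:117` line. minikube does this in Go, but the equivalent flow can be sketched with the `openssl` CLI (all paths, the org name, and the 1-day validity below are illustrative stand-ins, not minikube's actual values):

```shell
# Sketch of the server-cert generation reported by provision.go:117,
# using openssl instead of minikube's Go crypto code.
dir=$(mktemp -d)

# CA key + self-signed CA cert (stand-ins for ca-key.pem / ca.pem).
openssl genrsa -out "$dir/ca-key.pem" 2048 2>/dev/null
openssl req -new -x509 -key "$dir/ca-key.pem" -subj "/O=jenkins.demo" \
  -days 1 -out "$dir/ca.pem"

# Server key + CSR, then sign with SANs like those shown in the log.
openssl genrsa -out "$dir/server-key.pem" 2048 2>/dev/null
openssl req -new -key "$dir/server-key.pem" -subj "/O=demo" -out "$dir/server.csr"
printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube\n' > "$dir/ext.cnf"
openssl x509 -req -in "$dir/server.csr" -CA "$dir/ca.pem" -CAkey "$dir/ca-key.pem" \
  -CAcreateserial -days 1 -extfile "$dir/ext.cnf" -out "$dir/server.pem" 2>/dev/null

# Confirm the chain verifies against the CA.
openssl verify -CAfile "$dir/ca.pem" "$dir/server.pem"
```

The resulting `server.pem`/`server-key.pem` pair is what the subsequent `copyRemoteCerts` step scp's into `/etc/docker/` on the guest.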
	I0709 11:18:21.651880   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:18:21.652889   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:18:21.652889   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:23.890387   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:26.564345   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:26.565125   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:26.565125   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:18:26.688579   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:18:26.688579   11080 buildroot.go:70] root file system type: tmpfs
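The root-filesystem probe above is a single `df` pipeline: `--output=fstype` prints just the filesystem-type column (a GNU coreutils option), and `tail -n 1` strips the header row, leaving the bare type (`tmpfs` on the Buildroot guest). Reproduced locally:

```shell
# Probe the root filesystem type exactly as the log shows.
# --output=fstype is GNU df; tail drops the "Type" header line.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs type: ${fstype}"
```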
	I0709 11:18:26.688751   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:18:26.688751   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:28.871918   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:31.502951   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:31.503345   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:31.503345   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:18:31.658280   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:18:31.658412   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:33.800464   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:36.418307   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:36.418361   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:36.423718   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:36.423718   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:36.424298   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:18:38.623401   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:18:38.623401   11080 machine.go:97] duration metric: took 46.5939015s to provisionDockerMachine
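The `diff ... || { mv ...; systemctl ... }` one-liner above is an idempotent-update idiom: the generated `docker.service.new` only replaces the installed unit (and triggers a daemon-reload and restart) when its content differs, or when the target does not exist yet, as the `diff: can't stat` output shows on this first boot. A minimal sketch of the same idiom on plain files, with hypothetical `/tmp` paths and no systemd involved:

```shell
# Idempotent file install: replace the target only when content changed,
# mirroring the `diff -u old new || { mv; restart }` pattern in the log.
new="/tmp/docker.service.new.$$"
target="/tmp/docker.service.demo.$$"
printf '%s\n' '[Unit]' 'Description=demo' > "$new"
if ! diff -u "$target" "$new" >/dev/null 2>&1; then
    mv "$new" "$target"        # differs or target missing: install the new file
    echo "updated"
else
    rm -f "$new"               # identical: leave the target untouched
    echo "unchanged"
fi
```

Running it a second time with identical generated content takes the `unchanged` branch, which is why re-provisioning an already-configured machine skips the docker restart.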
	I0709 11:18:38.624385   11080 client.go:171] duration metric: took 1m57.4441387s to LocalClient.Create
	I0709 11:18:38.624385   11080 start.go:167] duration metric: took 1m57.4442999s to libmachine.API.Create "multinode-849000"
	I0709 11:18:38.624385   11080 start.go:293] postStartSetup for "multinode-849000" (driver="hyperv")
	I0709 11:18:38.624385   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:18:38.635377   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:18:38.635377   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:40.803077   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:40.803227   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:40.803332   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:43.382675   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:43.483674   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8482809s)
	I0709 11:18:43.496129   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:18:43.504466   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:18:43.504466   11080 command_runner.go:130] > ID=buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:18:43.504466   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:18:43.504466   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:18:43.504466   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:18:43.505074   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:18:43.506014   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:18:43.506014   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:18:43.518207   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:18:43.536167   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:18:43.580014   11080 start.go:296] duration metric: took 4.955526s for postStartSetup
	I0709 11:18:43.583840   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:45.720485   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:48.244917   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:18:48.247885   11080 start.go:128] duration metric: took 2m7.0717492s to createHost
	I0709 11:18:48.247974   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:50.357356   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:52.893710   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:52.893837   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:52.893837   11080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 11:18:53.018311   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549133.027082640
	
	I0709 11:18:53.018311   11080 fix.go:216] guest clock: 1720549133.027082640
	I0709 11:18:53.018311   11080 fix.go:229] Guest: 2024-07-09 11:18:53.02708264 -0700 PDT Remote: 2024-07-09 11:18:48.2478857 -0700 PDT m=+132.622337601 (delta=4.77919694s)
	I0709 11:18:53.018461   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:55.134647   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:57.706817   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:57.707574   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:57.707574   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549133
	I0709 11:18:57.837990   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:18:53 UTC 2024
	
	I0709 11:18:57.837990   11080 fix.go:236] clock set: Tue Jul  9 18:18:53 UTC 2024
	 (err=<nil>)
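The clock-fix step above reads the guest clock with `date +%s.%N`, computes the ~4.8 s delta against the host, and pushes the host's epoch second back with `sudo date -s @1720549133`. The epoch value from the log can be rendered without privileges to confirm it matches the `Tue Jul  9 18:18:53 UTC 2024` the guest echoed back (`date -d @...` is GNU date):

```shell
# Render the epoch second minikube pushed to the guest (from the log above).
epoch=1720549133
date -u -d "@${epoch}" '+%Y-%m-%d %H:%M:%S UTC'
# → 2024-07-09 18:18:53 UTC
```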
	I0709 11:18:57.837990   11080 start.go:83] releasing machines lock for "multinode-849000", held for 2m16.662394s
	I0709 11:18:57.837990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:59.937542   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:02.440702   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:19:02.440914   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:02.450148   11080 ssh_runner.go:195] Run: cat /version.json
	I0709 11:19:02.451159   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.652788   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:07.368844   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.369236   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.369437   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.395266   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.516234   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:19:07.516234   11080 command_runner.go:130] > {"iso_version": "v1.33.1-1720433170-19199", "kicbase_version": "v0.0.44-1720012048-19186", "minikube_version": "v1.33.1", "commit": "41ed6339bbe6a947e5e92015e7dd216db14d0b72"}
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: cat /version.json: (5.0661785s)
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0755151s)
	I0709 11:19:07.529057   11080 ssh_runner.go:195] Run: systemctl --version
	I0709 11:19:07.538439   11080 command_runner.go:130] > systemd 252 (252)
	I0709 11:19:07.538533   11080 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0709 11:19:07.550293   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:19:07.559188   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0709 11:19:07.559555   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:19:07.570397   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:19:07.596860   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:19:07.598042   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:19:07.598090   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:07.598448   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:07.631211   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:19:07.642798   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:19:07.672487   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:19:07.691044   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:19:07.702345   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:19:07.737161   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.766120   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:19:07.798415   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.831110   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:19:07.865314   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:19:07.899412   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:19:07.929191   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
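The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place to select the `cgroupfs` cgroup driver, pin the pause image, and normalize the runc runtime name. The key edit, the `SystemdCgroup` flip, can be reproduced on a scratch copy of a minimal config (the TOML fragment below is illustrative, not the guest's full config):

```shell
# Reproduce the cgroup-driver rewrite from the log on a scratch file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same sed as the log: preserve leading indentation, force the value to false.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
# → "  SystemdCgroup = false"
```

The `\1` backreference is what keeps the TOML indentation intact, so the edit works at any nesting depth in the real config.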
	I0709 11:19:07.959649   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:19:07.977886   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:19:07.990402   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:19:08.021057   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:08.212039   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 11:19:08.247477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:08.260899   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Unit]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:19:08.287773   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:19:08.287773   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:19:08.287773   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Service]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Type=notify
	I0709 11:19:08.287773   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:19:08.287773   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:19:08.287773   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:19:08.287773   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:19:08.287773   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:19:08.287773   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:19:08.287773   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:19:08.287773   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:19:08.288322   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:19:08.288322   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:19:08.288322   11080 command_runner.go:130] > ExecStart=
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:19:08.288380   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:19:08.288380   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:19:08.288532   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:19:08.288603   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:19:08.288603   11080 command_runner.go:130] > Delegate=yes
	I0709 11:19:08.288603   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:19:08.288644   11080 command_runner.go:130] > KillMode=process
	I0709 11:19:08.288644   11080 command_runner.go:130] > [Install]
	I0709 11:19:08.288644   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:19:08.299913   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.334941   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:19:08.378216   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.411780   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.445847   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:19:08.504747   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.527698   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:08.557879   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
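The `printf ... | sudo tee /etc/crictl.yaml` step above points crictl at the cri-dockerd socket (replacing the containerd endpoint written earlier in the run). The same one-line config can be written without privileges to a scratch path:

```shell
# Write the crictl endpoint config from the log to a scratch file
# (the real target is /etc/crictl.yaml, written via sudo tee).
tmp=$(mktemp)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' > "$tmp"
cat "$tmp"
# → runtime-endpoint: unix:///var/run/cri-dockerd.sock
```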
	I0709 11:19:08.569949   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:19:08.575730   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:19:08.587321   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:19:08.604542   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:19:08.652744   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:19:08.860138   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:19:09.036606   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:19:09.036846   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:19:09.086669   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:09.274594   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:11.819580   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5449771s)
	I0709 11:19:11.830623   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 11:19:11.865432   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:11.899527   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 11:19:12.080125   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 11:19:12.263695   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.465673   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 11:19:12.506610   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:12.540854   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.740781   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 11:19:12.845180   11080 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 11:19:12.856179   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0709 11:19:12.864333   11080 command_runner.go:130] > Device: 0,22	Inode: 881         Links: 1
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864333   11080 command_runner.go:130] > Modify: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] > Change: 2024-07-09 18:19:12.777376059 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:12.865396   11080 start.go:562] Will wait 60s for crictl version
	I0709 11:19:12.878013   11080 ssh_runner.go:195] Run: which crictl
	I0709 11:19:12.883453   11080 command_runner.go:130] > /usr/bin/crictl
	I0709 11:19:12.896196   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 11:19:12.945750   11080 command_runner.go:130] > Version:  0.1.0
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeName:  docker
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeApiVersion:  v1
	I0709 11:19:12.946914   11080 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 11:19:12.955749   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:12.986144   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:12.997084   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:13.033222   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:13.039328   11080 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 11:19:13.039536   11080 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: 172.18.192.1/20
	I0709 11:19:13.058315   11080 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 11:19:13.064313   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:13.085011   11080 kubeadm.go:877] updating cluster {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 11:19:13.085193   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:19:13.094647   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:13.119600   11080 docker.go:685] Got preloaded images: 
	I0709 11:19:13.119753   11080 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 11:19:13.132471   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:13.150071   11080 command_runner.go:139] > {"Repositories":{}}
	I0709 11:19:13.160388   11080 ssh_runner.go:195] Run: which lz4
	I0709 11:19:13.168652   11080 command_runner.go:130] > /usr/bin/lz4
	I0709 11:19:13.168652   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0709 11:19:13.180500   11080 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0709 11:19:13.186301   11080 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0709 11:19:14.857940   11080 docker.go:649] duration metric: took 1.6892825s to copy over tarball
	I0709 11:19:14.870175   11080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 11:19:23.389025   11080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5188212s)
	I0709 11:19:23.389025   11080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0709 11:19:23.458573   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:23.485866   11080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0709 11:19:23.486188   11080 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 11:19:23.533118   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:23.744757   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:27.380382   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6356119s)
	I0709 11:19:27.389977   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0709 11:19:27.415657   11080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:27.415657   11080 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 11:19:27.415657   11080 cache_images.go:84] Images are preloaded, skipping loading
	I0709 11:19:27.415657   11080 kubeadm.go:928] updating node { 172.18.206.134 8443 v1.30.2 docker true true} ...
	I0709 11:19:27.415657   11080 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-849000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.206.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 11:19:27.423616   11080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 11:19:27.458657   11080 command_runner.go:130] > cgroupfs
	I0709 11:19:27.459385   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:27.459385   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:27.459452   11080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 11:19:27.459452   11080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.206.134 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-849000 NodeName:multinode-849000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.206.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.206.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 11:19:27.459589   11080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.206.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-849000"
	  kubeletExtraArgs:
	    node-ip: 172.18.206.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.206.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0709 11:19:27.472965   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubeadm
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubectl
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubelet
	I0709 11:19:27.499841   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 11:19:27.511476   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 11:19:27.527506   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0709 11:19:27.555887   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 11:19:27.582917   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0709 11:19:27.625088   11080 ssh_runner.go:195] Run: grep 172.18.206.134	control-plane.minikube.internal$ /etc/hosts
	I0709 11:19:27.629979   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.206.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:27.662105   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:27.863890   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:27.891871   11080 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000 for IP: 172.18.206.134
	I0709 11:19:27.891871   11080 certs.go:194] generating shared ca certs ...
	I0709 11:19:27.891974   11080 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 11:19:27.893231   11080 certs.go:256] generating profile certs ...
	I0709 11:19:27.894104   11080 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key
	I0709 11:19:27.894284   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt with IP's: []
	I0709 11:19:28.075685   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt ...
	I0709 11:19:28.075685   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt: {Name:mk25257931a758267f442465386bb9bdebfd15e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.077683   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key ...
	I0709 11:19:28.077683   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key: {Name:mk28ea0dfb093b7e1eceacf2d9e8a6ee777dbd4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.078679   11080 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab
	I0709 11:19:28.078679   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.206.134]
	I0709 11:19:28.282674   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab ...
	I0709 11:19:28.282674   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab: {Name:mk6d3927cc1582195a75050ba0c963c9f3cc6b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.284187   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab ...
	I0709 11:19:28.284187   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab: {Name:mk7c2c31b56e9fbc5ac0d0a2d8ec4a706b474e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.285485   11080 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt
	I0709 11:19:28.296251   11080 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key
	I0709 11:19:28.297243   11080 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key
	I0709 11:19:28.297243   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt with IP's: []
	I0709 11:19:28.588714   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt ...
	I0709 11:19:28.588714   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt: {Name:mk558fea8586bf42355b37f550a2aab396534e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590476   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key ...
	I0709 11:19:28.590476   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key: {Name:mk91292cc98d71191163856df723afdf525149d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 11:19:28.591953   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 11:19:28.592200   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 11:19:28.592414   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 11:19:28.592581   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 11:19:28.592751   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 11:19:28.601940   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 11:19:28.602968   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 11:19:28.602968   11080 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 11:19:28.603997   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 11:19:28.604332   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 11:19:28.604696   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 11:19:28.605757   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 11:19:28.606105   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 11:19:28.606281   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:28.607895   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 11:19:28.657063   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 11:19:28.708475   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 11:19:28.753169   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 11:19:28.799111   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0709 11:19:28.843096   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 11:19:28.892474   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 11:19:28.936778   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 11:19:28.983720   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 11:19:29.032197   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 11:19:29.078840   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 11:19:29.121438   11080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 11:19:29.166376   11080 ssh_runner.go:195] Run: openssl version
	I0709 11:19:29.174606   11080 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0709 11:19:29.186263   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 11:19:29.214563   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221452   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221529   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.233587   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.241034   11080 command_runner.go:130] > 51391683
	I0709 11:19:29.253531   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 11:19:29.287599   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 11:19:29.319642   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.340563   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.351251   11080 command_runner.go:130] > 3ec20f2e
	I0709 11:19:29.363289   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 11:19:29.394996   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 11:19:29.430863   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439488   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439598   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.451335   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.461060   11080 command_runner.go:130] > b5213941
	I0709 11:19:29.472325   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
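The three cert steps above all follow OpenSSL's hashed-directory convention: hash the certificate subject with `openssl x509 -hash -noout`, then symlink the cert into `/etc/ssl/certs` under `<hash>.<n>` so the library can look it up by subject. A minimal Python sketch of just the link-naming rule (the hash values here are taken from the log; this is an illustration, not minikube's code):

```python
def hashed_link_name(subject_hash: str, n: int = 0) -> str:
    # OpenSSL hashed-directory convention: links are named
    # <8-hex-digit subject hash>.<n>, with n disambiguating collisions.
    return f"/etc/ssl/certs/{subject_hash}.{n}"

# The log links 15032.pem under its subject hash 51391683:
print(hashed_link_name("51391683"))   # /etc/ssl/certs/51391683.0
```

This matches the `ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0` commands in the log above.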
	I0709 11:19:29.502349   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 11:19:29.508349   11080 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.508349   11080 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.509336   11080 kubeadm.go:391] StartCluster: {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:19:29.517326   11080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 11:19:29.552571   11080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0709 11:19:29.583129   11080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 11:19:29.614110   11080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0709 11:19:29.630668   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631001   11080 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631083   11080 kubeadm.go:156] found existing configuration files:
	
	I0709 11:19:29.643858   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 11:19:29.660913   11080 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.660913   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.672874   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 11:19:29.701166   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 11:19:29.719398   11080 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.719398   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.732866   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 11:19:29.764341   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.780362   11080 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.781070   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.793378   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.822887   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 11:19:29.839358   11080 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.839848   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.851450   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
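The four grep-then-rm exchanges above are minikube's stale-config cleanup: each kubeconfig under `/etc/kubernetes` is checked for the expected control-plane endpoint, and any file that does not reference it is removed before `kubeadm init` runs. A small Python sketch of that decision (a simplification for illustration, not the actual kubeadm.go logic):

```python
CONF_FILES = ["admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"]

def stale_configs(contents: dict, endpoint: str) -> list:
    # A config is stale when it exists but does not mention the expected
    # control-plane endpoint (a missing file greps as "not found" too,
    # which is why the log shows the same remove path on first start).
    return [name for name in CONF_FILES if endpoint not in contents.get(name, "")]
```

On a first start, as in this log, every file is absent, so all four are treated as stale and removed.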
	I0709 11:19:29.868927   11080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 11:19:30.273184   11080 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:30.273184   11080 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:43.382099   11080 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [preflight] Running pre-flight checks
	I0709 11:19:43.382302   11080 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 11:19:43.382490   11080 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382562   11080 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.382843   11080 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.385956   11080 out.go:204]   - Generating certificates and keys ...
	I0709 11:19:43.386701   11080 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0709 11:19:43.386720   11080 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 11:19:43.386939   11080 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386963   11080 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.387517   11080 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387517   11080 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387702   11080 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387746   11080 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387967   11080 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.387967   11080 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.388299   11080 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388370   11080 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388585   11080 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388585   11080 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.392839   11080 out.go:204]   - Booting up control plane ...
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.395906   11080 kubeadm.go:309] [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.396929   11080 kubeadm.go:309] [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 command_runner.go:130] > [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 kubeadm.go:309] [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.396929   11080 command_runner.go:130] > [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.399982   11080 out.go:204]   - Configuring RBAC rules ...
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.401848   11080 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.401848   11080 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.405851   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:43.405851   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:43.408882   11080 out.go:177] * Configuring CNI (Container Networking Interface) ...
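The `cni.go:136` line above records the CNI choice: a multinode cluster was requested, so even with only one node joined so far, kindnet is recommended. A hypothetical Python simplification of that selection (the real logic lives in minikube's `cni.go` and weighs more inputs, such as the container runtime and user flags):

```python
def recommend_cni(multinode_requested: bool, node_count: int, user_choice: str = "") -> str:
    # An explicit --cni flag always wins; otherwise multi-node setups
    # need a pod network that routes across nodes, hence kindnet.
    if user_choice:
        return user_choice
    return "kindnet" if multinode_requested or node_count > 1 else "bridge"
```

Here `MultiNodeRequested:true` in the StartCluster config is what drives the kindnet recommendation despite `1 nodes found`.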
	I0709 11:19:43.427890   11080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0709 11:19:43.436838   11080 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: 2024-07-09 18:17:47.269542400 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Modify: 2024-07-08 15:41:40.000000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Change: 2024-07-09 11:17:38.873000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:43.437660   11080 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0709 11:19:43.437724   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0709 11:19:43.486974   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0709 11:19:44.013734   11080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.028712   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.056718   11080 command_runner.go:130] > serviceaccount/kindnet created
	I0709 11:19:44.082804   11080 command_runner.go:130] > daemonset.apps/kindnet created
	I0709 11:19:44.086715   11080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-849000 minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=multinode-849000 minikube.k8s.io/primary=true
	I0709 11:19:44.115923   11080 command_runner.go:130] > -16
	I0709 11:19:44.121702   11080 ops.go:34] apiserver oom_adj: -16
	I0709 11:19:44.326882   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0709 11:19:44.332192   11080 command_runner.go:130] > node/multinode-849000 labeled
	I0709 11:19:44.342094   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.456107   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:44.849260   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.954493   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.356403   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.456462   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.855390   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.956473   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.355707   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.465842   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.857102   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.969191   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.359571   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.471625   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.845990   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.968255   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.348435   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.444253   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.849560   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.962518   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.355988   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.464938   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.857549   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.960971   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.358892   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.517544   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.859431   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.965459   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.346160   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.448688   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.850874   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.960813   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.349922   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.460568   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.858017   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.978603   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.347266   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.460858   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.852199   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.970042   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.358007   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.467115   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.847966   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.971538   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.352008   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.457997   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.855006   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.967023   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.356509   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.497561   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.848447   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.958599   11080 command_runner.go:130] > NAME      SECRETS   AGE
	I0709 11:19:56.958599   11080 command_runner.go:130] > default   0         0s
	I0709 11:19:56.958599   11080 kubeadm.go:1107] duration metric: took 12.8717652s to wait for elevateKubeSystemPrivileges
	W0709 11:19:56.958599   11080 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 11:19:56.958599   11080 kubeadm.go:393] duration metric: took 27.4491691s to StartCluster
	I0709 11:19:56.958599   11080 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.958599   11080 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:56.961504   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.963374   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 11:19:56.963460   11080 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:19:56.963460   11080 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0709 11:19:56.963779   11080 addons.go:69] Setting default-storageclass=true in profile "multinode-849000"
	I0709 11:19:56.963724   11080 addons.go:69] Setting storage-provisioner=true in profile "multinode-849000"
	I0709 11:19:56.963837   11080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-849000"
	I0709 11:19:56.963837   11080 addons.go:234] Setting addon storage-provisioner=true in "multinode-849000"
	I0709 11:19:56.963837   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:56.963837   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:19:56.964647   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.965248   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.970232   11080 out.go:177] * Verifying Kubernetes components...
	I0709 11:19:56.985249   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:57.211673   11080 command_runner.go:130] > apiVersion: v1
	I0709 11:19:57.211752   11080 command_runner.go:130] > data:
	I0709 11:19:57.211752   11080 command_runner.go:130] >   Corefile: |
	I0709 11:19:57.211752   11080 command_runner.go:130] >     .:53 {
	I0709 11:19:57.211752   11080 command_runner.go:130] >         errors
	I0709 11:19:57.211752   11080 command_runner.go:130] >         health {
	I0709 11:19:57.211752   11080 command_runner.go:130] >            lameduck 5s
	I0709 11:19:57.211752   11080 command_runner.go:130] >         }
	I0709 11:19:57.211752   11080 command_runner.go:130] >         ready
	I0709 11:19:57.211825   11080 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0709 11:19:57.211825   11080 command_runner.go:130] >            pods insecure
	I0709 11:19:57.211825   11080 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0709 11:19:57.211825   11080 command_runner.go:130] >            ttl 30
	I0709 11:19:57.211825   11080 command_runner.go:130] >         }
	I0709 11:19:57.211825   11080 command_runner.go:130] >         prometheus :9153
	I0709 11:19:57.211825   11080 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0709 11:19:57.211914   11080 command_runner.go:130] >            max_concurrent 1000
	I0709 11:19:57.211914   11080 command_runner.go:130] >         }
	I0709 11:19:57.211914   11080 command_runner.go:130] >         cache 30
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loop
	I0709 11:19:57.211914   11080 command_runner.go:130] >         reload
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loadbalance
	I0709 11:19:57.212061   11080 command_runner.go:130] >     }
	I0709 11:19:57.212061   11080 command_runner.go:130] > kind: ConfigMap
	I0709 11:19:57.212061   11080 command_runner.go:130] > metadata:
	I0709 11:19:57.212127   11080 command_runner.go:130] >   creationTimestamp: "2024-07-09T18:19:42Z"
	I0709 11:19:57.212127   11080 command_runner.go:130] >   name: coredns
	I0709 11:19:57.212127   11080 command_runner.go:130] >   namespace: kube-system
	I0709 11:19:57.212127   11080 command_runner.go:130] >   resourceVersion: "259"
	I0709 11:19:57.212301   11080 command_runner.go:130] >   uid: 7f6d77d9-aa71-4460-bf8f-36c58243a4c9
	I0709 11:19:57.212540   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 11:19:57.402732   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:57.866428   11080 command_runner.go:130] > configmap/coredns replaced
	I0709 11:19:57.866428   11080 start.go:946] {"host.minikube.internal": 172.18.192.1} host record injected into CoreDNS's ConfigMap
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.869413   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.870414   11080 cert_rotation.go:137] Starting client certificate rotation controller
	I0709 11:19:57.870414   11080 node_ready.go:35] waiting up to 6m0s for node "multinode-849000" to be "Ready" ...
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.885872   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.885872   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Audit-Id: 6bb3d639-9069-4a29-8363-06f8a9831c96
	I0709 11:19:57.886681   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.886681   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:57.887054   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Audit-Id: f8472087-a57e-416c-8eb7-93f828e86e4a
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.887125   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.887908   11080 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.888641   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.888641   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:19:57.888641   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.922291   11080 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0709 11:19:57.922618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Audit-Id: 71677033-c49e-4d37-8393-48341086209c
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.922733   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"391","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.384286   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:19:58.384390   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384390   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 0be5af66-01cb-451f-b03f-f7b17cb342f0
	I0709 11:19:58.384457   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 73b21b85-deb0-469b-929c-809b7004c7a7
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"401","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:58.384457   11080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-849000" context rescaled to 1 replicas
	I0709 11:19:58.870813   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.871025   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.871025   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.871025   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.873618   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:19:58.873618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Audit-Id: ad90069a-940e-4cdb-af81-263d232584a4
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.874322   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.874523   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.317106   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:59.317937   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:59.319000   11080 addons.go:234] Setting addon default-storageclass=true in "multinode-849000"
	I0709 11:19:59.319148   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:59.320086   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.326790   11080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:59.329802   11080 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:19:59.329802   11080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 11:19:59.329802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.380372   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.380372   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.380485   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.380485   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.383785   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:19:59.384697   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Audit-Id: 2d911086-1ff9-4073-8947-dda5637edc43
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.385157   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.876671   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.876962   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.876962   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.876962   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.882163   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:59.882430   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Audit-Id: ad80d923-4aa0-4499-baf3-ad4ec184183d
	I0709 11:19:59.882575   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.883719   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.884541   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:00.380571   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.380571   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.380571   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.380571   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.383966   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:00.384064   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Audit-Id: 4a57b8ec-36c2-4d90-9953-8040b268ad72
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.384193   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.384193   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.384227   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.384339   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:00.874487   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.874487   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.874577   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.874577   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.878085   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:00.878446   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Audit-Id: 7a79b48d-490c-45b9-8151-9d41d845548a
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.878824   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.384736   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.384736   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.384736   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.384736   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.389692   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:01.389768   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.389768   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.389768   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.389862   11080 round_trippers.go:580]     Audit-Id: 1717079c-a1a4-4056-ab5c-ebb223423669
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.389950   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.391360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.648493   11080 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:01.648493   11080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:20:01.693665   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.693737   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.693813   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:01.876763   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.876763   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.876763   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.876763   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.879377   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:01.879377   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Audit-Id: 0ed34bf6-0054-408f-9605-05f03b8f80e6
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.880494   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.384156   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.384242   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.384242   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.384242   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.387596   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:02.388425   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.388519   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.388569   11080 round_trippers.go:580]     Audit-Id: 259b4cd6-103a-46f6-84e4-4843fc15af0a
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.389015   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.389720   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:02.877416   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.877512   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.877583   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.877583   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.880264   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:02.880264   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Audit-Id: 5562798d-5a0c-40f4-971f-b148e1abc842
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.881513   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.385289   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.385402   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.385505   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.385568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.388996   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.389181   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Audit-Id: 4ecfd387-5cb9-439c-becc-8c20cdb41af7
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.389360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.879716   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.879972   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.879972   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.879972   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.883598   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.883598   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Audit-Id: ec1efeda-bf31-45f7-a76f-11d053440253
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.884488   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.951175   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:03.951212   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:03.951320   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:04.384770   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.384770   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.384770   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.384770   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.390877   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:04.390877   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Audit-Id: 2dfefc86-a830-4942-9bba-6769c2bc2c15
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.391263   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:04.391723   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:04.417029   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:04.417846   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:04.417999   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:04.559903   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:20:04.876248   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.876326   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.876326   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.876326   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.879898   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:04.879898   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Audit-Id: 1a6b0670-7193-473e-b8b3-1e5ed801e6c2
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.880302   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.131215   11080 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0709 11:20:05.131215   11080 command_runner.go:130] > pod/storage-provisioner created
	I0709 11:20:05.382732   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.382846   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.382846   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.382940   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.385465   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:05.385465   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Audit-Id: a9b472dd-22b2-460d-9517-6e634e4a101a
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.386469   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.875363   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.875363   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.875363   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.875363   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.879073   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:05.879530   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Audit-Id: 27ad306f-2225-40f7-8dc1-fa87ab3246f1
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.879530   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.879646   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.879646   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.880110   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.381697   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.381697   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.381697   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.381697   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.385207   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.385655   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Audit-Id: 696fd9a0-d92d-43a9-8bb1-bfc5d15a688d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.385720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:06.619934   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:06.761070   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:06.873491   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.873559   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.873559   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.873615   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.876478   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.876544   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Audit-Id: efcee314-8dd6-4c48-a1a6-4bf059942d04
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.876612   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.876721   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.877563   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:06.908144   11080 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0709 11:20:06.908847   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses
	I0709 11:20:06.908910   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.908910   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.908910   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.912483   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.912686   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Length: 1273
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Audit-Id: 739ee856-002a-4545-9544-df6be0efec2a
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.912921   11080 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0709 11:20:06.913516   11080 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.913596   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0709 11:20:06.913596   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:20:06.913704   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.916586   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.916586   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Audit-Id: a5ae0cbf-9df0-489a-8da4-2e8f3aa910ad
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Length: 1220
	I0709 11:20:06.917609   11080 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.921571   11080 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0709 11:20:06.923563   11080 addons.go:510] duration metric: took 9.9600694s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0709 11:20:07.375568   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.375568   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.375568   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.375568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.378569   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:07.379620   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Audit-Id: bd77f714-dc63-4d2c-bf78-52162a6b64d7
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.380117   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:07.875799   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.875861   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.875861   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.875861   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.880450   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:07.880704   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Audit-Id: 74d6bf60-f1ad-4db1-861f-6ea7ba47b092
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.881227   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:08.380911   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.381007   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.381007   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.381059   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.384650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.384650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Audit-Id: 46699637-e1f2-4ffe-9a5a-606601b7ce76
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.385170   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.385430   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.385689   11080 node_ready.go:49] node "multinode-849000" has status "Ready":"True"
	I0709 11:20:08.385689   11080 node_ready.go:38] duration metric: took 10.5152391s for node "multinode-849000" to be "Ready" ...
	I0709 11:20:08.385689   11080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:08.385689   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:08.385689   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.385689   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.385689   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.389650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.389650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Audit-Id: c7a373c1-e4d1-49a7-b63d-f1f5fe5cbdfe
	I0709 11:20:08.391677   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0709 11:20:08.396680   11080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:08.396680   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.396680   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.396680   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.397654   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.401662   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:08.401662   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Audit-Id: f0c73321-6fb5-4d40-a2ca-139f50a7329a
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.402451   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.403030   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.403030   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.403030   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.403030   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.409674   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:08.409674   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.409674   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Audit-Id: f9f6bf0c-50a8-416b-b487-7a0381a93ada
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.411023   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.904464   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.904538   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.904538   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.904538   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.924115   11080 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0709 11:20:08.924115   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.924115   11080 round_trippers.go:580]     Audit-Id: 5c7a83f8-f6fb-40c3-af41-44c2d80fb1eb
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.924509   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.925643   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.925643   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.925643   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.925643   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.942620   11080 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0709 11:20:08.943087   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Audit-Id: 1a00f334-2356-4158-b461-0e0c6821e0b6
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.945720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.412235   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.412389   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.412389   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.412389   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.417018   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.417018   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Audit-Id: 1bacafec-faf2-4175-9ce5-e5206b1140e1
	I0709 11:20:09.417950   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:09.418720   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.418777   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.418777   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.418777   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.421159   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.421159   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Audit-Id: 2bf8156c-3153-4e3e-b8c5-b1b8a2e4e26e
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.423016   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.901337   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.901337   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.901337   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.901337   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.953926   11080 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0709 11:20:09.953926   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Audit-Id: 1aada5b5-53a1-4882-b982-815daf34a5c5
	I0709 11:20:09.955836   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0709 11:20:09.956635   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.956732   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.956732   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.956732   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.959094   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.959094   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Audit-Id: ae59e9a3-f8ac-437b-9c75-8931309c73ad
	I0709 11:20:09.960120   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.960120   11080 pod_ready.go:92] pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.960661   11080 pod_ready.go:81] duration metric: took 1.5639759s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-849000
	I0709 11:20:09.960661   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.960828   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.960828   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.969075   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.969075   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Audit-Id: a17b78fa-415e-466e-8ae8-a1653319ab27
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.969743   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-849000","namespace":"kube-system","uid":"d9414b5f-b783-46b5-bd41-e07fbd338491","resourceVersion":"303","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.206.134:2379","kubernetes.io/config.hash":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.mirror":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.seen":"2024-07-09T18:19:42.812164051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0709 11:20:09.969743   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.970269   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.970321   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.970321   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.979269   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.979269   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Audit-Id: cfddc806-0d43-46bb-bd86-3712a4ab9215
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.979994   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.980431   11080 pod_ready.go:92] pod "etcd-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.980497   11080 pod_ready.go:81] duration metric: took 19.7697ms for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980497   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980690   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-849000
	I0709 11:20:09.980722   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.980753   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.980753   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.984639   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:09.984639   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Audit-Id: 4f8bf9fa-3246-46ce-b3d4-8ea91623128e
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.985248   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-849000","namespace":"kube-system","uid":"185dfcae-7f97-43a4-8ad7-9c2e18ad93f4","resourceVersion":"300","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.206.134:8443","kubernetes.io/config.hash":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.mirror":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0709 11:20:09.986253   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.986253   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.986320   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.986320   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.990658   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.990658   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Audit-Id: fc9d97ed-a036-474e-af5f-aba9fc7ea966
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.991081   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.991515   11080 pod_ready.go:92] pod "kube-apiserver-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.991547   11080 pod_ready.go:81] duration metric: took 11.0006ms for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991547   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991623   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-849000
	I0709 11:20:09.991803   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.991803   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.991803   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.002697   11080 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 11:20:10.002697   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.002697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.002697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Audit-Id: 5618d530-048d-4e22-a41f-dbc85f57723c
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.003187   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.003187   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.003445   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-849000","namespace":"kube-system","uid":"84786301-1bd7-4d77-900b-1130c36259bc","resourceVersion":"316","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.mirror":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165951Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0709 11:20:10.004195   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.004275   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.004275   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.004275   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.011235   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:10.011235   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Audit-Id: b83b8a86-c88b-4eda-adbc-8a4b41174f1d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.011896   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.012314   11080 pod_ready.go:92] pod "kube-controller-manager-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.012440   11080 pod_ready.go:81] duration metric: took 20.8924ms for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012440   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012550   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qv64t
	I0709 11:20:10.012621   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.012662   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.012662   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.016102   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.016102   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Audit-Id: 9328b861-5000-4723-bef4-66bdf082cdc5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.016102   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qv64t","generateName":"kube-proxy-","namespace":"kube-system","uid":"64fd2bca-c117-405b-98c4-db980781839b","resourceVersion":"407","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"20beb658-ecf0-4085-ad20-237b0700e9f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20beb658-ecf0-4085-ad20-237b0700e9f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0709 11:20:10.017415   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.017554   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.017554   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.017554   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.021755   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.021755   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Audit-Id: 7b57217c-1b40-42ea-bd05-ba32c6c09379
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.022911   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.023043   11080 pod_ready.go:92] pod "kube-proxy-qv64t" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.023043   11080 pod_ready.go:81] duration metric: took 10.6037ms for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.023043   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.182509   11080 request.go:629] Waited for 159.4656ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182778   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182865   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.182865   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.182897   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.186242   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.186242   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Audit-Id: 821c7888-15a2-4ad9-a6ba-adc53ab8a4f6
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.186554   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.186784   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-849000","namespace":"kube-system","uid":"03dff506-a8f6-41bd-baac-1ef9ad6892e3","resourceVersion":"323","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.mirror":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.seen":"2024-07-09T18:19:42.812159751Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0709 11:20:10.385659   11080 request.go:629] Waited for 198.2784ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.385659   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.385659   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.389558   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.389771   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Audit-Id: 9cc904cb-e823-4a93-85c2-226f98e81fdf
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.390169   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.390760   11080 pod_ready.go:92] pod "kube-scheduler-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.390865   11080 pod_ready.go:81] duration metric: took 367.8204ms for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.390865   11080 pod_ready.go:38] duration metric: took 2.0051694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
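	The pod_ready waits above follow a poll-until-timeout pattern: fetch the pod, check its "Ready" condition, and retry until a deadline (6m0s here), recording the elapsed time as a "duration metric". A language-agnostic sketch of that loop (the `pod_ready` stand-in below is hypothetical, not minikube's actual helper):

```python
import time

def wait_for_condition(check, timeout: float, interval: float = 0.01):
    """Poll check() until it returns True or `timeout` seconds elapse.
    Returns (succeeded, elapsed), mirroring the "duration metric: took ..."
    lines in the log. Hypothetical sketch, not minikube's real code."""
    start = time.monotonic()
    while True:
        if check():
            return True, time.monotonic() - start
        if time.monotonic() - start >= timeout:
            return False, time.monotonic() - start
        time.sleep(interval)

# Simulate a pod that reports "Ready" on the third poll.
polls = {"n": 0}
def pod_ready():
    polls["n"] += 1
    return polls["n"] >= 3

ok, took = wait_for_condition(pod_ready, timeout=5.0)
```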
	I0709 11:20:10.390944   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0709 11:20:10.403679   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 11:20:10.435279   11080 command_runner.go:130] > 2115
	I0709 11:20:10.436278   11080 api_server.go:72] duration metric: took 13.4725942s to wait for apiserver process to appear ...
	I0709 11:20:10.436474   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0709 11:20:10.436474   11080 api_server.go:253] Checking apiserver healthz at https://172.18.206.134:8443/healthz ...
	I0709 11:20:10.445806   11080 api_server.go:279] https://172.18.206.134:8443/healthz returned 200:
	ok
	I0709 11:20:10.445926   11080 round_trippers.go:463] GET https://172.18.206.134:8443/version
	I0709 11:20:10.445926   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.445926   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.445926   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.448281   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:10.448281   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Audit-Id: 7be21a54-db6a-4318-a5ec-f0cce4ef44ab
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.448527   11080 round_trippers.go:580]     Content-Length: 263
	I0709 11:20:10.448527   11080 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0709 11:20:10.448527   11080 api_server.go:141] control plane version: v1.30.2
	I0709 11:20:10.448527   11080 api_server.go:131] duration metric: took 12.0534ms to wait for apiserver health ...
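	The /version response body above is plain JSON; the client reads `gitVersion` out of it to report the control-plane version ("control plane version: v1.30.2"). A minimal parse of that same payload, abbreviated to the fields used here:

```python
import json

# The /version body as logged above (abbreviated).
body = """{
  "major": "1",
  "minor": "30",
  "gitVersion": "v1.30.2",
  "platform": "linux/amd64"
}"""

info = json.loads(body)
control_plane_version = info["gitVersion"]  # -> "v1.30.2"
```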
	I0709 11:20:10.448527   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 11:20:10.589225   11080 request.go:629] Waited for 140.697ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.589493   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.589493   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.594092   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.594092   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Audit-Id: 2b8208e7-66c3-407d-a513-81f6367a1a50
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.594092   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.594453   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.594453   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.596104   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.598949   11080 system_pods.go:59] 8 kube-system pods found
	I0709 11:20:10.599087   11080 system_pods.go:61] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.599087   11080 system_pods.go:74] duration metric: took 150.5589ms to wait for pod list to return data ...
	I0709 11:20:10.599087   11080 default_sa.go:34] waiting for default service account to be created ...
	I0709 11:20:10.792113   11080 request.go:629] Waited for 192.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792224   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792412   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.792412   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.792412   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.796230   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.796230   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.796230   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Content-Length: 261
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Audit-Id: bc150d93-fb7c-4582-beac-a89c1e26ce41
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.796858   11080 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1dc179c9-669f-4ab7-8a39-5d6fc6670d2d","resourceVersion":"341","creationTimestamp":"2024-07-09T18:19:56Z"}}]}
	I0709 11:20:10.797248   11080 default_sa.go:45] found service account: "default"
	I0709 11:20:10.797329   11080 default_sa.go:55] duration metric: took 198.009ms for default service account to be created ...
	I0709 11:20:10.797329   11080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 11:20:10.981424   11080 request.go:629] Waited for 183.8495ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981505   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981752   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.981752   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.981752   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.987139   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:10.987139   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.987139   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Audit-Id: dc7e70c7-c26f-47bd-af7e-e59f9f0e6a12
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.987854   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.990198   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.994984   11080 system_pods.go:86] 8 kube-system pods found
	I0709 11:20:10.994984   11080 system_pods.go:89] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.995749   11080 system_pods.go:126] duration metric: took 198.4185ms to wait for k8s-apps to be running ...
	I0709 11:20:10.995749   11080 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 11:20:11.006411   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:20:11.032299   11080 system_svc.go:56] duration metric: took 36.2519ms WaitForService to wait for kubelet
	I0709 11:20:11.032384   11080 kubeadm.go:576] duration metric: took 14.0686983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:20:11.032449   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0709 11:20:11.185036   11080 request.go:629] Waited for 152.3704ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:11.185036   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:11.185036   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:11.188676   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:11.188676   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:11 GMT
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Audit-Id: de445958-d4f3-421b-bce6-7208e043ef68
	I0709 11:20:11.189854   11080 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0709 11:20:11.190610   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 11:20:11.190610   11080 node_conditions.go:123] node cpu capacity is 2
	I0709 11:20:11.190610   11080 node_conditions.go:105] duration metric: took 158.1605ms to run NodePressure ...
	I0709 11:20:11.190610   11080 start.go:240] waiting for startup goroutines ...
	I0709 11:20:11.190610   11080 start.go:245] waiting for cluster config update ...
	I0709 11:20:11.190610   11080 start.go:254] writing updated cluster config ...
	I0709 11:20:11.194395   11080 out.go:177] 
	I0709 11:20:11.197726   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.210868   11080 out.go:177] * Starting "multinode-849000-m02" worker node in "multinode-849000" cluster
	I0709 11:20:11.213536   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:20:11.214479   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:20:11.214815   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:20:11.215058   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:20:11.215282   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.219596   11080 start.go:360] acquireMachinesLock for multinode-849000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:20:11.219782   11080 start.go:364] duration metric: took 159µs to acquireMachinesLock for "multinode-849000-m02"
	I0709 11:20:11.219811   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0709 11:20:11.219811   11080 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0709 11:20:11.223353   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:20:11.223353   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:20:11.223353   11080 client.go:168] LocalClient.Create starting
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224657   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:20:13.151358   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:20:13.151782   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:13.151847   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:20:14.883405   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:20:14.883642   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:14.883703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:20.080459   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:20:20.573750   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: Creating VM...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:23.656383   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:23.657490   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:23.657490   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:20:23.657579   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:25.447625   11080 main.go:141] libmachine: Creating VHD
	I0709 11:20:25.447625   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5E53C6D0-5109-4D35-B1EC-1393270CA44B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:20:29.284763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:20:32.544147   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:32.544825   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:32.544942   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -SizeBytes 20000MB
	I0709 11:20:35.179825   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-849000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000-m02 -DynamicMemoryEnabled $false
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000-m02 -Count 2
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:43.474205   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\boot2docker.iso'
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:46.097188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd'
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: Starting VM...
	I0709 11:20:49.141353   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000-m02
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:52.444588   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:20:52.444802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:54.848352   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:57.488165   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:57.488298   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:58.493459   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:00.761195   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:03.353161   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:03.353743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:04.368700   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:06.644937   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:10.193913   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:16.096106   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:18.442305   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:23.279312   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:21:23.279415   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:25.559526   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:25.560574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:25.560679   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:28.232227   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:28.233232   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:28.238921   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:28.250822   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:28.250822   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:21:28.388458   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:21:28.388571   11080 buildroot.go:166] provisioning hostname "multinode-849000-m02"
	I0709 11:21:28.388571   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:30.618011   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:33.212355   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:33.212671   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:33.219551   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:33.220082   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:33.220082   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000-m02 && echo "multinode-849000-m02" | sudo tee /etc/hostname
	I0709 11:21:33.391210   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000-m02
	
	I0709 11:21:33.391343   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:35.578543   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:38.191886   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:38.192615   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:38.192615   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:21:38.341565   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:21:38.341639   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:21:38.341639   11080 buildroot.go:174] setting up certificates
	I0709 11:21:38.341639   11080 provision.go:84] configureAuth start
	I0709 11:21:38.341639   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:43.076717   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:45.280910   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:45.281082   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:45.281156   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:47.878898   11080 provision.go:143] copyHostCerts
	I0709 11:21:47.879605   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:21:47.880180   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:21:47.880180   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:21:47.880971   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:21:47.882540   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:21:47.883125   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:21:47.883125   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:21:47.883679   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:21:47.885058   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:21:47.885436   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:21:47.885557   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:21:47.886134   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:21:47.887498   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000-m02 san=[127.0.0.1 172.18.205.211 localhost minikube multinode-849000-m02]
	I0709 11:21:48.001674   11080 provision.go:177] copyRemoteCerts
	I0709 11:21:48.013068   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:21:48.014084   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:50.250018   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:50.250215   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:50.250314   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:52.836979   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:52.837914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:52.838808   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:21:52.940691   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9274594s)
	I0709 11:21:52.940691   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:21:52.941438   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:21:52.990054   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:21:52.990054   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:21:53.038708   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:21:53.039254   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0709 11:21:53.086100   11080 provision.go:87] duration metric: took 14.7444116s to configureAuth
	I0709 11:21:53.086158   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:21:53.086860   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:21:53.086990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:55.350257   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:55.351179   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:55.351218   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:57.996542   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:57.997434   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:57.997434   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:21:58.134576   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:21:58.134576   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:21:58.135124   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:21:58.135124   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:00.283090   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:00.284070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:00.284213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:02.866133   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:02.866377   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:02.871379   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:02.872132   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:02.872132   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.206.134"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:22:03.038743   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.206.134
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:22:03.038743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:05.225105   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:07.815935   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:07.816766   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:07.816766   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:22:10.033737   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:22:10.033805   11080 machine.go:97] duration metric: took 46.7543344s to provisionDockerMachine
	I0709 11:22:10.033805   11080 client.go:171] duration metric: took 1m58.8100611s to LocalClient.Create
	I0709 11:22:10.033904   11080 start.go:167] duration metric: took 1m58.81016s to libmachine.API.Create "multinode-849000"
	I0709 11:22:10.033904   11080 start.go:293] postStartSetup for "multinode-849000-m02" (driver="hyperv")
	I0709 11:22:10.033904   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:22:10.049483   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:22:10.049483   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:12.196759   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:14.773966   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:14.774211   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:14.774388   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:14.880469   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8308404s)
	I0709 11:22:14.893820   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:22:14.900205   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:22:14.900586   11080 command_runner.go:130] > ID=buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:22:14.900586   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:22:14.900878   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:22:14.900958   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:22:14.901694   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:22:14.902949   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:22:14.903007   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:22:14.914648   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:22:14.931988   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:22:14.976672   11080 start.go:296] duration metric: took 4.9427507s for postStartSetup
	I0709 11:22:14.980296   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:17.149588   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:19.731744   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:22:19.734373   11080 start.go:128] duration metric: took 2m8.5141378s to createHost
	I0709 11:22:19.734498   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:21.884569   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:21.885475   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:21.885570   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:24.462310   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:24.462866   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:24.462866   11080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 11:22:24.602515   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549344.609926885
	
	I0709 11:22:24.602629   11080 fix.go:216] guest clock: 1720549344.609926885
	I0709 11:22:24.602629   11080 fix.go:229] Guest: 2024-07-09 11:22:24.609926885 -0700 PDT Remote: 2024-07-09 11:22:19.7344985 -0700 PDT m=+344.108245701 (delta=4.875428385s)
	I0709 11:22:24.602743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:26.788501   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:29.322797   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:29.323325   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:29.323492   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549344
	I0709 11:22:29.467864   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:22:24 UTC 2024
	
	I0709 11:22:29.467922   11080 fix.go:236] clock set: Tue Jul  9 18:22:24 UTC 2024
	 (err=<nil>)
	I0709 11:22:29.467976   11080 start.go:83] releasing machines lock for "multinode-849000-m02", held for 2m18.2477075s
	I0709 11:22:29.468213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:31.622432   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:31.623654   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:31.623715   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:34.183731   11080 out.go:177] * Found network options:
	I0709 11:22:34.186860   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.188920   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.191174   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.194227   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 11:22:34.195301   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.198398   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:22:34.198526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:34.208413   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:22:34.209355   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474885   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:39.120904   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.121123   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.121331   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.150109   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.214930   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0709 11:22:39.216101   11080 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0076706s)
	W0709 11:22:39.216101   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:22:39.228355   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:22:39.361349   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:22:39.361418   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:22:39.361418   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1630028s)
	I0709 11:22:39.361567   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:22:39.361605   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:39.361773   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:39.395534   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
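The crictl endpoint write above uses a printf-into-`sudo tee` pattern to place `/etc/crictl.yaml` on the guest. A minimal local sketch of the same pattern, run against a temporary directory rather than the VM's real `/etc` (the temp path is illustrative, not part of the test run):

```shell
# Sketch of the crictl.yaml write minikube performs over SSH.
# Writes to a temp dir instead of the guest's /etc (illustrative only).
tmp=$(mktemp -d)
mkdir -p "$tmp/etc"
printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' \
  | tee "$tmp/etc/crictl.yaml" >/dev/null
cat "$tmp/etc/crictl.yaml"
```

The `tee` (rather than a shell redirect) matters on the guest because only the `tee` process runs under `sudo`; a plain `>` redirect would be performed by the unprivileged shell.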
	I0709 11:22:39.411076   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:22:39.440578   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:22:39.459507   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:22:39.472271   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:22:39.503478   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.535129   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:22:39.565594   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.596645   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:22:39.626303   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:22:39.657871   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:22:39.687857   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:22:39.718726   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:22:39.737354   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:22:39.750092   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:22:39.780554   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:39.961136   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
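The sed sequence above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` driver and pin the sandbox image before restarting containerd. A self-contained sketch of the two key substitutions against a sample file (the sample TOML content is an assumption for illustration, not the VM's actual config):

```shell
# Sketch of minikube's containerd config edits, on a sample file.
tmp=$(mktemp -d)
cat > "$tmp/config.toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitutions the log shows running over SSH:
# disable the systemd cgroup driver and pin the pause image to 3.9.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp/config.toml"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$tmp/config.toml"
grep -E 'SystemdCgroup|sandbox_image' "$tmp/config.toml"
```

The `\1` backreference preserves the original indentation, so the edit is safe regardless of how deeply the key is nested in the TOML tables.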
	I0709 11:22:40.003477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:40.015211   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:22:40.037706   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:22:40.037931   11080 command_runner.go:130] > [Unit]
	I0709 11:22:40.037931   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:22:40.037931   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:22:40.037931   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:22:40.037931   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:22:40.037996   11080 command_runner.go:130] > [Service]
	I0709 11:22:40.037996   11080 command_runner.go:130] > Type=notify
	I0709 11:22:40.037996   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:22:40.037996   11080 command_runner.go:130] > Environment=NO_PROXY=172.18.206.134
	I0709 11:22:40.037996   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:22:40.037996   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:22:40.038089   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:22:40.038089   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:22:40.038089   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:22:40.038089   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:22:40.038089   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:22:40.038158   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:22:40.038158   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:22:40.038158   11080 command_runner.go:130] > ExecStart=
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:22:40.038260   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:22:40.038260   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:22:40.038260   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:22:40.038323   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:22:40.038430   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:22:40.038469   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:22:40.038532   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:22:40.038566   11080 command_runner.go:130] > Delegate=yes
	I0709 11:22:40.038566   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:22:40.038566   11080 command_runner.go:130] > KillMode=process
	I0709 11:22:40.038566   11080 command_runner.go:130] > [Install]
	I0709 11:22:40.038609   11080 command_runner.go:130] > WantedBy=multi-user.target
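The unit file dumped above is a systemd drop-in that, as its own comments note, must emit an empty `ExecStart=` before defining a new one; otherwise systemd refuses to start the service with "more than one ExecStart= setting". A small sketch that checks a unit file follows this reset pattern (the sample unit content is illustrative, not the guest's real file):

```shell
# Check that a drop-in blanks ExecStart= before redefining it.
tmp=$(mktemp -d)
cat > "$tmp/docker.service" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# The first ExecStart= directive must be empty (the reset); the second
# carries the real command line. systemd applies them in order.
first=$(grep -m1 '^ExecStart=' "$tmp/docker.service")
if [ "$first" = "ExecStart=" ]; then
  echo "ExecStart correctly cleared before override"
else
  echo "missing ExecStart= reset; systemd would reject this unit" >&2
fi
```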
	I0709 11:22:40.055979   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.091794   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:22:40.154011   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.190664   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.226820   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:22:40.287595   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.308575   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:40.342070   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:22:40.354449   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:22:40.359803   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:22:40.371212   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:22:40.388323   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:22:40.433437   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:22:40.633922   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:22:40.820826   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:22:40.820826   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:22:40.864181   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:41.057366   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:23:42.172852   11080 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0709 11:23:42.172852   11080 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0709 11:23:42.173160   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1155866s)
	I0709 11:23:42.185419   11080 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.209973   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.210951   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211574   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211639   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0709 11:23:42.221589   11080 out.go:177] 
	W0709 11:23:42.223827   11080 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0709 11:23:42.223827   11080 out.go:239] * 
	W0709 11:23:42.225718   11080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0709 11:23:42.228228   11080 out.go:177] 
	
	
	==> Docker <==
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597835991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597891091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597905791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597983991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597776491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d8c6b21616c767448c4be98bae932ed2b404a3dadcf2b2b4b157e8bcf347ea/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a33ce3348449c0faec48fb58b4574718ba6b78d837824e60579921c71f06d76/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968184436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968452735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968474235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968801835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.141801596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.142933705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.143853812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.144140014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904534514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904809014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904875715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904980715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:18 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/216d18e70c2fb87f116d16247afca62184ce070d4aca7bbce19d833808db917c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 09 18:24:19 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285320124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285707025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285773326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285917526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c7a0fcb9e869e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   216d18e70c2fb       busybox-fc5497c4f-f2j8m
	c150592e658c3       cbb01a7bd410d                                                                                         16 minutes ago      Running             coredns                   0                   2a33ce3348449       coredns-7db6d8ff4d-lzsvc
	37c7b8e14dc9c       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   06d8c6b21616c       storage-provisioner
	f3de6fb5f7f77       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              16 minutes ago      Running             kindnet-cni               0                   668c809456776       kindnet-8ww8c
	02ab9d1727686       53c535741fb44                                                                                         17 minutes ago      Running             kube-proxy                0                   0a60f24294838       kube-proxy-qv64t
	0272c56037c7d       3861cfcd7c04c                                                                                         17 minutes ago      Running             etcd                      0                   2c574be2cc6d3       etcd-multinode-849000
	8661e349d48ab       7820c83aa1394                                                                                         17 minutes ago      Running             kube-scheduler            0                   b9412aa955ab7       kube-scheduler-multinode-849000
	a89ee753e7759       e874818b3caac                                                                                         17 minutes ago      Running             kube-controller-manager   0                   a610e3d24fa06       kube-controller-manager-multinode-849000
	556077ae2825d       56ce0fd9fb532                                                                                         17 minutes ago      Running             kube-apiserver            0                   2035bb8593f0e       kube-apiserver-multinode-849000
	
	
	==> coredns [c150592e658c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = eabdad51eef6fc649fa850c178ba451366b41048c1c621a6be25e706245d9103e597e4866d961c875c300d6a366ff9db50ab3afe55608b789039c53007846ed6
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54651 - 41351 "HINFO IN 6752767091270397564.1917026836058955763. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104932825s
	[INFO] 10.244.0.3:37665 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218301s
	[INFO] 10.244.0.3:33292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.095768808s
	[INFO] 10.244.0.3:51028 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033779908s
	[INFO] 10.244.0.3:52198 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.254317433s
	[INFO] 10.244.0.3:58685 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001442s
	[INFO] 10.244.0.3:50205 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.085049073s
	[INFO] 10.244.0.3:41462 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002117s
	[INFO] 10.244.0.3:46161 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002965s
	[INFO] 10.244.0.3:40010 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.038270523s
	[INFO] 10.244.0.3:50213 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181901s
	[INFO] 10.244.0.3:40333 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208801s
	[INFO] 10.244.0.3:33479 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001618s
	[INFO] 10.244.0.3:44590 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223001s
	[INFO] 10.244.0.3:58378 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001694s
	[INFO] 10.244.0.3:35676 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.0.3:50088 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126901s
	[INFO] 10.244.0.3:60351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000289801s
	[INFO] 10.244.0.3:33623 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000197201s
	[INFO] 10.244.0.3:60126 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001055s
	[INFO] 10.244.0.3:44284 - 5 "PTR IN 1.192.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150901s
	
	
	==> describe nodes <==
	Name:               multinode-849000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-849000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=multinode-849000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 18:19:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-849000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 18:36:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 18:34:59 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 18:34:59 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 18:34:59 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 18:34:59 +0000   Tue, 09 Jul 2024 18:20:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.206.134
	  Hostname:    multinode-849000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 af90c209c8a84d288c2d79663fa33a94
	  System UUID:                69e31ac5-0527-9e4a-81b6-556c6bac7963
	  Boot ID:                    5c1387e9-724e-4a1c-a3cc-dde77e8449e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f2j8m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-lzsvc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-multinode-849000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-8ww8c                               100m (5%)    100m (5%)    50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-multinode-849000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-multinode-849000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-qv64t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-multinode-849000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (10%)   220Mi (10%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node multinode-849000 event: Registered Node multinode-849000 in Controller
	  Normal  NodeReady                16m                kubelet          Node multinode-849000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.061894] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul 9 18:18] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.172355] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[Jul 9 18:19] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.106297] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.542997] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.194600] systemd-fstab-generator[1056]: Ignoring "noauto" option for root device
	[  +0.225984] systemd-fstab-generator[1070]: Ignoring "noauto" option for root device
	[  +2.819794] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.174764] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.183052] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.284648] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[ +10.989764] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.110491] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.025456] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.572905] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.100801] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.070675] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.120083] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.551679] systemd-fstab-generator[2475]: Ignoring "noauto" option for root device
	[  +0.193907] kauditd_printk_skb: 12 callbacks suppressed
	[Jul 9 18:20] kauditd_printk_skb: 51 callbacks suppressed
	[Jul 9 18:24] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [0272c56037c7] <==
	{"level":"info","ts":"2024-07-09T18:19:37.796851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 received MsgPreVoteResp from e42eecf9634a170 at term 1"}
	{"level":"info","ts":"2024-07-09T18:19:37.797062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 became candidate at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.79733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 received MsgVoteResp from e42eecf9634a170 at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.797375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e42eecf9634a170 became leader at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.797444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e42eecf9634a170 elected leader e42eecf9634a170 at term 2"}
	{"level":"info","ts":"2024-07-09T18:19:37.80456Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e42eecf9634a170","local-member-attributes":"{Name:multinode-849000 ClientURLs:[https://172.18.206.134:2379]}","request-path":"/0/members/e42eecf9634a170/attributes","cluster-id":"88434b99d7bbd165","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-09T18:19:37.804755Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-09T18:19:37.804945Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-09T18:19:37.805302Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.812564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-09T18:19:37.819296Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-09T18:19:37.819456Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-09T18:19:37.820534Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.18.206.134:2379"}
	{"level":"info","ts":"2024-07-09T18:19:37.82294Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"88434b99d7bbd165","local-member-id":"e42eecf9634a170","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.8454Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.845615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:29:37.886741Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":687}
	{"level":"info","ts":"2024-07-09T18:29:37.900514Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":687,"took":"13.301342ms","hash":2108544045,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2121728,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-09T18:29:37.900644Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2108544045,"revision":687,"compact-revision":-1}
	{"level":"info","ts":"2024-07-09T18:34:37.903933Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-07-09T18:34:37.912189Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":927,"took":"7.652225ms","hash":1821337612,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-09T18:34:37.912513Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1821337612,"revision":927,"compact-revision":687}
	{"level":"info","ts":"2024-07-09T18:35:57.287138Z","caller":"traceutil/trace.go:171","msg":"trace[1176997031] linearizableReadLoop","detail":"{readStateIndex:1442; appliedIndex:1441; }","duration":"158.59851ms","start":"2024-07-09T18:35:57.12852Z","end":"2024-07-09T18:35:57.287118Z","steps":["trace[1176997031] 'read index received'  (duration: 137.916144ms)","trace[1176997031] 'applied index is now lower than readState.Index'  (duration: 20.680866ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-09T18:35:57.287544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.000512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-4hjks\" ","response":"range_response_count:1 size:2221"}
	{"level":"info","ts":"2024-07-09T18:35:57.287811Z","caller":"traceutil/trace.go:171","msg":"trace[632773735] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-4hjks; range_end:; response_count:1; response_revision:1233; }","duration":"159.270012ms","start":"2024-07-09T18:35:57.128515Z","end":"2024-07-09T18:35:57.287785Z","steps":["trace[632773735] 'agreement among raft nodes before linearized reading'  (duration: 158.812611ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:37:03 up 19 min,  0 users,  load average: 0.42, 0.61, 0.40
	Linux multinode-849000 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f3de6fb5f7f7] <==
	I0709 18:34:57.310053       1 main.go:227] handling current node
	I0709 18:35:07.323091       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:07.323167       1 main.go:227] handling current node
	I0709 18:35:17.336093       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:17.336185       1 main.go:227] handling current node
	I0709 18:35:27.341401       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:27.341436       1 main.go:227] handling current node
	I0709 18:35:37.356864       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:37.356887       1 main.go:227] handling current node
	I0709 18:35:47.364672       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:47.365207       1 main.go:227] handling current node
	I0709 18:35:57.378884       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:35:57.379004       1 main.go:227] handling current node
	I0709 18:36:07.387740       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:36:07.387857       1 main.go:227] handling current node
	I0709 18:36:17.401563       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:36:17.401925       1 main.go:227] handling current node
	I0709 18:36:27.412337       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:36:27.412489       1 main.go:227] handling current node
	I0709 18:36:37.418935       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:36:37.419045       1 main.go:227] handling current node
	I0709 18:36:47.433205       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:36:47.433251       1 main.go:227] handling current node
	I0709 18:36:57.438386       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:36:57.438492       1 main.go:227] handling current node
	
	
	==> kube-apiserver [556077ae2825] <==
	I0709 18:19:39.638553       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0709 18:19:39.698240       1 shared_informer.go:320] Caches are synced for configmaps
	I0709 18:19:39.700011       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0709 18:19:39.702635       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0709 18:19:39.714433       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0709 18:19:40.505081       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0709 18:19:40.517142       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0709 18:19:40.517305       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0709 18:19:41.636583       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0709 18:19:41.706223       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0709 18:19:41.808149       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0709 18:19:41.821195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.206.134]
	I0709 18:19:41.822637       1 controller.go:615] quota admission added evaluator for: endpoints
	I0709 18:19:41.843642       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0709 18:19:42.609385       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0709 18:19:42.805564       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0709 18:19:42.871569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0709 18:19:42.907682       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0709 18:19:57.333598       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0709 18:19:57.543081       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0709 18:35:55.870544       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53940: use of closed network connection
	E0709 18:35:56.795209       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53945: use of closed network connection
	E0709 18:35:57.698486       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53950: use of closed network connection
	E0709 18:36:33.178526       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53970: use of closed network connection
	E0709 18:36:43.597768       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53972: use of closed network connection
	
	
	==> kube-controller-manager [a89ee753e775] <==
	I0709 18:19:56.612136       1 shared_informer.go:320] Caches are synced for PV protection
	I0709 18:19:56.613536       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0709 18:19:56.667448       1 shared_informer.go:320] Caches are synced for attach detach
	I0709 18:19:56.718158       1 shared_informer.go:320] Caches are synced for resource quota
	I0709 18:19:56.736984       1 shared_informer.go:320] Caches are synced for resource quota
	I0709 18:19:57.154681       1 shared_informer.go:320] Caches are synced for garbage collector
	I0709 18:19:57.154714       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0709 18:19:57.208598       1 shared_informer.go:320] Caches are synced for garbage collector
	I0709 18:19:57.743180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="172.458844ms"
	I0709 18:19:57.765649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.805292ms"
	I0709 18:19:57.815368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.660854ms"
	I0709 18:19:57.815916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.6µs"
	I0709 18:19:58.007755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.828816ms"
	I0709 18:19:58.026709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.106923ms"
	I0709 18:19:58.029403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.1µs"
	I0709 18:20:07.977654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.049991ms"
	I0709 18:20:08.015594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111µs"
	I0709 18:20:09.991729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.353168ms"
	I0709 18:20:10.001112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="868.106µs"
	I0709 18:20:11.554561       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0709 18:24:17.420348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.233775ms"
	I0709 18:24:17.441694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.911551ms"
	I0709 18:24:17.444364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.629006ms"
	I0709 18:24:20.165672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.094324ms"
	I0709 18:24:20.166173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	
	
	==> kube-proxy [02ab9d172768] <==
	I0709 18:19:58.913720       1 server_linux.go:69] "Using iptables proxy"
	I0709 18:19:58.935439       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.206.134"]
	I0709 18:19:59.002175       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 18:19:59.002345       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 18:19:59.002422       1 server_linux.go:165] "Using iptables Proxier"
	I0709 18:19:59.006984       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 18:19:59.008394       1 server.go:872] "Version info" version="v1.30.2"
	I0709 18:19:59.008567       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 18:19:59.012208       1 config.go:192] "Starting service config controller"
	I0709 18:19:59.012230       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 18:19:59.012257       1 config.go:101] "Starting endpoint slice config controller"
	I0709 18:19:59.012263       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 18:19:59.014777       1 config.go:319] "Starting node config controller"
	I0709 18:19:59.015900       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 18:19:59.113145       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0709 18:19:59.113150       1 shared_informer.go:320] Caches are synced for service config
	I0709 18:19:59.116402       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8661e349d48a] <==
	W0709 18:19:40.760717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0709 18:19:40.760830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0709 18:19:40.849864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0709 18:19:40.850245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0709 18:19:40.865437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.865496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.872200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0709 18:19:40.872364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0709 18:19:40.917325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.917365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.931008       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.931093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.976206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0709 18:19:40.976434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0709 18:19:41.005485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0709 18:19:41.005666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0709 18:19:41.019785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0709 18:19:41.020146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0709 18:19:41.110495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 18:19:41.110614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0709 18:19:41.120707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0709 18:19:41.122629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0709 18:19:41.253897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0709 18:19:41.254338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0709 18:19:43.553553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 09 18:32:42 multinode-849000 kubelet[2293]: E0709 18:32:42.973133    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:32:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:32:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:32:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:32:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:33:42 multinode-849000 kubelet[2293]: E0709 18:33:42.972677    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:33:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:33:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:33:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:33:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:34:42 multinode-849000 kubelet[2293]: E0709 18:34:42.972640    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:34:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:34:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:34:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:34:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:35:42 multinode-849000 kubelet[2293]: E0709 18:35:42.970822    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:35:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:35:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:35:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:35:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:36:42 multinode-849000 kubelet[2293]: E0709 18:36:42.972406    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:36:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:36:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:36:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:36:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [37c7b8e14dc9] <==
	I0709 18:20:09.057077       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0709 18:20:09.079655       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0709 18:20:09.079903       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0709 18:20:09.126660       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0709 18:20:09.126961       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-849000_6de5186f-60e7-46e7-ab51-a1dcafaef8f6!
	I0709 18:20:09.135679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ff72458-ea1d-45ee-8401-48a13fcbb227", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-849000_6de5186f-60e7-46e7-ab51-a1dcafaef8f6 became leader
	I0709 18:20:09.242255       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-849000_6de5186f-60e7-46e7-ab51-a1dcafaef8f6!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:36:55.771871    5708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000: (12.2862413s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-849000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-4hjks
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-849000 describe pod busybox-fc5497c4f-4hjks
helpers_test.go:282: (dbg) kubectl --context multinode-849000 describe pod busybox-fc5497c4f-4hjks:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-4hjks
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl8dk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hl8dk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m33s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (45.75s)

                                                
                                    
TestMultiNode/serial/AddNode (266.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-849000 -v 3 --alsologtostderr
E0709 11:38:04.139867   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 11:40:30.098442   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-849000 -v 3 --alsologtostderr: (3m16.4711722s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status --alsologtostderr
E0709 11:41:07.372512   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
multinode_test.go:127: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status --alsologtostderr: exit status 2 (35.9318961s)

                                                
                                                
-- stdout --
	multinode-849000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-849000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-849000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:40:34.448462    4872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0709 11:40:34.457402    4872 out.go:291] Setting OutFile to fd 1800 ...
	I0709 11:40:34.457402    4872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:40:34.457402    4872 out.go:304] Setting ErrFile to fd 1540...
	I0709 11:40:34.457402    4872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:40:34.474112    4872 out.go:298] Setting JSON to false
	I0709 11:40:34.475104    4872 mustload.go:65] Loading cluster: multinode-849000
	I0709 11:40:34.475104    4872 notify.go:220] Checking for updates...
	I0709 11:40:34.475104    4872 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:40:34.476070    4872 status.go:255] checking status of multinode-849000 ...
	I0709 11:40:34.477072    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:40:36.670387    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:40:36.670706    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:40:36.670706    4872 status.go:330] multinode-849000 host status = "Running" (err=<nil>)
	I0709 11:40:36.670827    4872 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:40:36.671444    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:40:38.870859    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:40:38.870951    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:40:38.871025    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:40:41.415451    4872 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:40:41.415552    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:40:41.415552    4872 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:40:41.428155    4872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 11:40:41.428155    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:40:43.591795    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:40:43.591795    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:40:43.591795    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:40:46.173415    4872 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:40:46.173415    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:40:46.174391    4872 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:40:46.267513    4872 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8393415s)
	I0709 11:40:46.279992    4872 ssh_runner.go:195] Run: systemctl --version
	I0709 11:40:46.304121    4872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:40:46.332606    4872 kubeconfig.go:125] found "multinode-849000" server: "https://172.18.206.134:8443"
	I0709 11:40:46.332737    4872 api_server.go:166] Checking apiserver status ...
	I0709 11:40:46.346242    4872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 11:40:46.387067    4872 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2115/cgroup
	W0709 11:40:46.406520    4872 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2115/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0709 11:40:46.418846    4872 ssh_runner.go:195] Run: ls
	I0709 11:40:46.426443    4872 api_server.go:253] Checking apiserver healthz at https://172.18.206.134:8443/healthz ...
	I0709 11:40:46.433819    4872 api_server.go:279] https://172.18.206.134:8443/healthz returned 200:
	ok
	I0709 11:40:46.434154    4872 status.go:422] multinode-849000 apiserver status = Running (err=<nil>)
	I0709 11:40:46.434220    4872 status.go:257] multinode-849000 status: &{Name:multinode-849000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 11:40:46.434220    4872 status.go:255] checking status of multinode-849000-m02 ...
	I0709 11:40:46.434987    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:40:48.600990    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:40:48.601054    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:40:48.601054    4872 status.go:330] multinode-849000-m02 host status = "Running" (err=<nil>)
	I0709 11:40:48.601054    4872 host.go:66] Checking if "multinode-849000-m02" exists ...
	I0709 11:40:48.601957    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:40:50.757781    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:40:50.757859    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:40:50.757859    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:40:53.302472    4872 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:40:53.303181    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:40:53.303240    4872 host.go:66] Checking if "multinode-849000-m02" exists ...
	I0709 11:40:53.316642    4872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 11:40:53.316642    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:40:55.500764    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:40:55.500764    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:40:55.501472    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:40:58.075415    4872 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:40:58.075450    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:40:58.075643    4872 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:40:58.180783    4872 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8641234s)
	I0709 11:40:58.192373    4872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:40:58.216751    4872 status.go:257] multinode-849000-m02 status: &{Name:multinode-849000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0709 11:40:58.216841    4872 status.go:255] checking status of multinode-849000-m03 ...
	I0709 11:40:58.217259    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:41:00.412438    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:41:00.412438    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:41:00.412438    4872 status.go:330] multinode-849000-m03 host status = "Running" (err=<nil>)
	I0709 11:41:00.412438    4872 host.go:66] Checking if "multinode-849000-m03" exists ...
	I0709 11:41:00.413891    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:41:02.667369    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:41:02.667369    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:41:02.667827    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:41:05.276846    4872 main.go:141] libmachine: [stdout =====>] : 172.18.196.236
	
	I0709 11:41:05.276846    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:41:05.276846    4872 host.go:66] Checking if "multinode-849000-m03" exists ...
	I0709 11:41:05.291263    4872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 11:41:05.291263    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:41:07.505701    4872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:41:07.505701    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:41:07.506375    4872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:41:10.103410    4872 main.go:141] libmachine: [stdout =====>] : 172.18.196.236
	
	I0709 11:41:10.103941    4872 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:41:10.104013    4872 sshutil.go:53] new ssh client: &{IP:172.18.196.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m03\id_rsa Username:docker}
	I0709 11:41:10.209857    4872 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9185761s)
	I0709 11:41:10.221510    4872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:41:10.244121    4872 status.go:257] multinode-849000-m03 status: &{Name:multinode-849000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:129: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-849000 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000: (12.0989946s)
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25: (8.394241s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p multinode-849000                               | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:16 PDT |                     |
	|         | --wait=true --memory=2200                         |                  |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- apply -f                   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:24 PDT | 09 Jul 24 11:24 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- rollout                    | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:24 PDT |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT | 09 Jul 24 11:36 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT |                     |
	|         | busybox-fc5497c4f-4hjks                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT | 09 Jul 24 11:36 PDT |
	|         | busybox-fc5497c4f-f2j8m                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT |                     |
	|         | busybox-fc5497c4f-f2j8m -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.192.1                         |                  |                   |         |                     |                     |
	| node    | add -p multinode-849000 -v 3                      | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:37 PDT | 09 Jul 24 11:40 PDT |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 11:16:35
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 11:16:35.706571   11080 out.go:291] Setting OutFile to fd 1856 ...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.707294   11080 out.go:304] Setting ErrFile to fd 1916...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.730175   11080 out.go:298] Setting JSON to false
	I0709 11:16:35.734088   11080 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7264,"bootTime":1720541731,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 11:16:35.734088   11080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 11:16:35.740900   11080 out.go:177] * [multinode-849000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 11:16:35.746952   11080 notify.go:220] Checking for updates...
	I0709 11:16:35.749517   11080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:16:35.752016   11080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 11:16:35.754074   11080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 11:16:35.757149   11080 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 11:16:35.759785   11080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 11:16:35.763232   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:16:35.763232   11080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 11:16:41.108594   11080 out.go:177] * Using the hyperv driver based on user configuration
	I0709 11:16:41.113436   11080 start.go:297] selected driver: hyperv
	I0709 11:16:41.113436   11080 start.go:901] validating driver "hyperv" against <nil>
	I0709 11:16:41.113436   11080 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 11:16:41.161717   11080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 11:16:41.163562   11080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:16:41.163562   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:16:41.163562   11080 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0709 11:16:41.163562   11080 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0709 11:16:41.163562   11080 start.go:340] cluster config:
	{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:16:41.164325   11080 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 11:16:41.169436   11080 out.go:177] * Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	I0709 11:16:41.171790   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:16:41.171790   11080 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 11:16:41.171790   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:16:41.172900   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:16:41.173204   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:16:41.173497   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:16:41.173834   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json: {Name:mkcd76fd0991636c9ebb3945d5f6230c136234ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:360] acquireMachinesLock for multinode-849000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-849000"
	I0709 11:16:41.175145   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:16:41.175717   11080 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 11:16:41.178833   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:16:41.179697   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:16:41.179858   11080 client.go:168] LocalClient.Create starting
	I0709 11:16:41.180393   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181037   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:16:41.181305   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.181363   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181499   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:43.203345   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:16:44.905448   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:49.977487   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:49.978001   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:49.980413   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:16:50.481409   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: Creating VM...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:53.557877   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:16:53.557877   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:55.342337   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:55.343188   11080 main.go:141] libmachine: Creating VHD
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:16:59.073202   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 250EFD27-3D80-4D94-9BBB-C36AC3EE4AF2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:16:59.073277   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:16:59.081799   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:02.356056   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -SizeBytes 20000MB
	I0709 11:17:04.920871   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:04.921598   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:04.921696   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-849000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000 -DynamicMemoryEnabled $false
	I0709 11:17:10.906954   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000 -Count 2
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:13.117046   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\boot2docker.iso'
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:15.734748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd'
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:18.434648   11080 main.go:141] libmachine: Starting VM...
	I0709 11:17:18.434648   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000
	I0709 11:17:21.548427   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:23.856308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:23.857327   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:23.857477   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:26.424823   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:26.425555   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:27.429457   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:29.669589   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:33.238604   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:35.539152   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:39.150748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:41.412758   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:43.945561   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:43.946556   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:44.948904   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:47.223493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:49.888321   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:52.029346   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:17:52.029346   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:54.184452   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:56.739762   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:56.740551   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:56.747332   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:17:56.757962   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:17:56.757962   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:17:56.888454   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:17:56.888454   11080 buildroot.go:166] provisioning hostname "multinode-849000"
	I0709 11:17:56.888632   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:58.996092   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:01.596255   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:01.596966   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:01.596966   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000 && echo "multinode-849000" | sudo tee /etc/hostname
	I0709 11:18:01.744135   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000
	
	I0709 11:18:01.744309   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:03.902843   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:06.504362   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:06.505105   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:06.511047   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:06.511730   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:06.511730   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:18:06.661183   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:18:06.661276   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:18:06.661276   11080 buildroot.go:174] setting up certificates
	I0709 11:18:06.661276   11080 provision.go:84] configureAuth start
	I0709 11:18:06.661404   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:08.870371   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:08.871487   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:08.871619   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:11.480657   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:13.679886   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:13.680032   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:13.680386   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:16.351593   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:16.351812   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:16.351812   11080 provision.go:143] copyHostCerts
	I0709 11:18:16.351812   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:18:16.351812   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:18:16.352341   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:18:16.352562   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:18:16.353746   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:18:16.353870   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:18:16.353870   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:18:16.354397   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:18:16.355454   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:18:16.355782   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:18:16.355782   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:18:16.356143   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:18:16.357550   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000 san=[127.0.0.1 172.18.206.134 localhost minikube multinode-849000]
	I0709 11:18:16.528750   11080 provision.go:177] copyRemoteCerts
	I0709 11:18:16.542866   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:18:16.543526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:18.745596   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:18.746390   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:18.746524   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:21.394478   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:21.394661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:21.394962   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:21.507114   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9635719s)
	I0709 11:18:21.507261   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:18:21.507746   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:18:21.555636   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:18:21.556231   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0709 11:18:21.603561   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:18:21.604047   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:18:21.651880   11080 provision.go:87] duration metric: took 14.9904677s to configureAuth
	I0709 11:18:21.651880   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:18:21.652889   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:18:21.652889   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:23.890387   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:26.564345   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:26.565125   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:26.565125   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:18:26.688579   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:18:26.688579   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:18:26.688751   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:18:26.688751   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:28.871918   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:31.502951   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:31.503345   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:31.503345   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:18:31.658280   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:18:31.658412   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:33.800464   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:36.418307   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:36.418361   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:36.423718   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:36.423718   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:36.424298   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:18:38.623401   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:18:38.623401   11080 machine.go:97] duration metric: took 46.5939015s to provisionDockerMachine
	I0709 11:18:38.624385   11080 client.go:171] duration metric: took 1m57.4441387s to LocalClient.Create
	I0709 11:18:38.624385   11080 start.go:167] duration metric: took 1m57.4442999s to libmachine.API.Create "multinode-849000"
	I0709 11:18:38.624385   11080 start.go:293] postStartSetup for "multinode-849000" (driver="hyperv")
	I0709 11:18:38.624385   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:18:38.635377   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:18:38.635377   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:40.803077   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:40.803227   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:40.803332   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:43.382675   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:43.483674   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8482809s)
	I0709 11:18:43.496129   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:18:43.504466   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:18:43.504466   11080 command_runner.go:130] > ID=buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:18:43.504466   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:18:43.504466   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:18:43.504466   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:18:43.505074   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:18:43.506014   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:18:43.506014   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:18:43.518207   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:18:43.536167   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:18:43.580014   11080 start.go:296] duration metric: took 4.955526s for postStartSetup
	I0709 11:18:43.583840   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:45.720485   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:48.244917   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:18:48.247885   11080 start.go:128] duration metric: took 2m7.0717492s to createHost
	I0709 11:18:48.247974   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:50.357356   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:52.893710   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:52.893837   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:52.893837   11080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 11:18:53.018311   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549133.027082640
	
	I0709 11:18:53.018311   11080 fix.go:216] guest clock: 1720549133.027082640
	I0709 11:18:53.018311   11080 fix.go:229] Guest: 2024-07-09 11:18:53.02708264 -0700 PDT Remote: 2024-07-09 11:18:48.2478857 -0700 PDT m=+132.622337601 (delta=4.77919694s)
	I0709 11:18:53.018461   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:55.134647   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:57.706817   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:57.707574   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:57.707574   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549133
	I0709 11:18:57.837990   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:18:53 UTC 2024
	
	I0709 11:18:57.837990   11080 fix.go:236] clock set: Tue Jul  9 18:18:53 UTC 2024
	 (err=<nil>)
	I0709 11:18:57.837990   11080 start.go:83] releasing machines lock for "multinode-849000", held for 2m16.662394s
	I0709 11:18:57.837990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:59.937542   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:02.440702   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:19:02.440914   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:02.450148   11080 ssh_runner.go:195] Run: cat /version.json
	I0709 11:19:02.451159   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.652788   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:07.368844   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.369236   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.369437   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.395266   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.516234   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:19:07.516234   11080 command_runner.go:130] > {"iso_version": "v1.33.1-1720433170-19199", "kicbase_version": "v0.0.44-1720012048-19186", "minikube_version": "v1.33.1", "commit": "41ed6339bbe6a947e5e92015e7dd216db14d0b72"}
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: cat /version.json: (5.0661785s)
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0755151s)
	I0709 11:19:07.529057   11080 ssh_runner.go:195] Run: systemctl --version
	I0709 11:19:07.538439   11080 command_runner.go:130] > systemd 252 (252)
	I0709 11:19:07.538533   11080 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0709 11:19:07.550293   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:19:07.559188   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0709 11:19:07.559555   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:19:07.570397   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:19:07.596860   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:19:07.598042   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:19:07.598090   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:07.598448   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:07.631211   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:19:07.642798   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:19:07.672487   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:19:07.691044   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:19:07.702345   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:19:07.737161   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.766120   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:19:07.798415   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.831110   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:19:07.865314   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:19:07.899412   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:19:07.929191   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:19:07.959649   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:19:07.977886   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:19:07.990402   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:19:08.021057   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:08.212039   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 11:19:08.247477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:08.260899   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Unit]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:19:08.287773   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:19:08.287773   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:19:08.287773   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Service]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Type=notify
	I0709 11:19:08.287773   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:19:08.287773   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:19:08.287773   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:19:08.287773   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:19:08.287773   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:19:08.287773   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:19:08.287773   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:19:08.287773   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:19:08.288322   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:19:08.288322   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:19:08.288322   11080 command_runner.go:130] > ExecStart=
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:19:08.288380   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:19:08.288380   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:19:08.288532   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:19:08.288603   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:19:08.288603   11080 command_runner.go:130] > Delegate=yes
	I0709 11:19:08.288603   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:19:08.288644   11080 command_runner.go:130] > KillMode=process
	I0709 11:19:08.288644   11080 command_runner.go:130] > [Install]
	I0709 11:19:08.288644   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:19:08.299913   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.334941   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:19:08.378216   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.411780   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.445847   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:19:08.504747   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.527698   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:08.557879   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:19:08.569949   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:19:08.575730   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:19:08.587321   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:19:08.604542   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:19:08.652744   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:19:08.860138   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:19:09.036606   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:19:09.036846   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:19:09.086669   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:09.274594   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:11.819580   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5449771s)
	I0709 11:19:11.830623   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 11:19:11.865432   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:11.899527   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 11:19:12.080125   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 11:19:12.263695   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.465673   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 11:19:12.506610   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:12.540854   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.740781   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 11:19:12.845180   11080 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 11:19:12.856179   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0709 11:19:12.864333   11080 command_runner.go:130] > Device: 0,22	Inode: 881         Links: 1
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864333   11080 command_runner.go:130] > Modify: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] > Change: 2024-07-09 18:19:12.777376059 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:12.865396   11080 start.go:562] Will wait 60s for crictl version
	I0709 11:19:12.878013   11080 ssh_runner.go:195] Run: which crictl
	I0709 11:19:12.883453   11080 command_runner.go:130] > /usr/bin/crictl
	I0709 11:19:12.896196   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 11:19:12.945750   11080 command_runner.go:130] > Version:  0.1.0
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeName:  docker
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeApiVersion:  v1
	I0709 11:19:12.946914   11080 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 11:19:12.955749   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:12.986144   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:12.997084   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:13.033222   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:13.039328   11080 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 11:19:13.039536   11080 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: 172.18.192.1/20
	I0709 11:19:13.058315   11080 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 11:19:13.064313   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:13.085011   11080 kubeadm.go:877] updating cluster {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 11:19:13.085193   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:19:13.094647   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:13.119600   11080 docker.go:685] Got preloaded images: 
	I0709 11:19:13.119753   11080 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 11:19:13.132471   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:13.150071   11080 command_runner.go:139] > {"Repositories":{}}
	I0709 11:19:13.160388   11080 ssh_runner.go:195] Run: which lz4
	I0709 11:19:13.168652   11080 command_runner.go:130] > /usr/bin/lz4
	I0709 11:19:13.168652   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0709 11:19:13.180500   11080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0709 11:19:13.186301   11080 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0709 11:19:14.857940   11080 docker.go:649] duration metric: took 1.6892825s to copy over tarball
	I0709 11:19:14.870175   11080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 11:19:23.389025   11080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5188212s)
	I0709 11:19:23.389025   11080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0709 11:19:23.458573   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:23.485866   11080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0709 11:19:23.486188   11080 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 11:19:23.533118   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:23.744757   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:27.380382   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6356119s)
	I0709 11:19:27.389977   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0709 11:19:27.415657   11080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:27.415657   11080 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 11:19:27.415657   11080 cache_images.go:84] Images are preloaded, skipping loading
	I0709 11:19:27.415657   11080 kubeadm.go:928] updating node { 172.18.206.134 8443 v1.30.2 docker true true} ...
	I0709 11:19:27.415657   11080 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-849000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.206.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 11:19:27.423616   11080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 11:19:27.458657   11080 command_runner.go:130] > cgroupfs
	I0709 11:19:27.459385   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:27.459385   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:27.459452   11080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 11:19:27.459452   11080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.206.134 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-849000 NodeName:multinode-849000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.206.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.206.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 11:19:27.459589   11080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.206.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-849000"
	  kubeletExtraArgs:
	    node-ip: 172.18.206.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.206.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0709 11:19:27.472965   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubeadm
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubectl
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubelet
	I0709 11:19:27.499841   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 11:19:27.511476   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 11:19:27.527506   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0709 11:19:27.555887   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 11:19:27.582917   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0709 11:19:27.625088   11080 ssh_runner.go:195] Run: grep 172.18.206.134	control-plane.minikube.internal$ /etc/hosts
	I0709 11:19:27.629979   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.206.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:27.662105   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:27.863890   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:27.891871   11080 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000 for IP: 172.18.206.134
	I0709 11:19:27.891871   11080 certs.go:194] generating shared ca certs ...
	I0709 11:19:27.891974   11080 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 11:19:27.893231   11080 certs.go:256] generating profile certs ...
	I0709 11:19:27.894104   11080 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key
	I0709 11:19:27.894284   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt with IP's: []
	I0709 11:19:28.075685   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt ...
	I0709 11:19:28.075685   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt: {Name:mk25257931a758267f442465386bb9bdebfd15e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.077683   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key ...
	I0709 11:19:28.077683   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key: {Name:mk28ea0dfb093b7e1eceacf2d9e8a6ee777dbd4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.078679   11080 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab
	I0709 11:19:28.078679   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.206.134]
	I0709 11:19:28.282674   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab ...
	I0709 11:19:28.282674   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab: {Name:mk6d3927cc1582195a75050ba0c963c9f3cc6b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.284187   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab ...
	I0709 11:19:28.284187   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab: {Name:mk7c2c31b56e9fbc5ac0d0a2d8ec4a706b474e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.285485   11080 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt
	I0709 11:19:28.296251   11080 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key
	I0709 11:19:28.297243   11080 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key
	I0709 11:19:28.297243   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt with IP's: []
	I0709 11:19:28.588714   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt ...
	I0709 11:19:28.588714   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt: {Name:mk558fea8586bf42355b37f550a2aab396534e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590476   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key ...
	I0709 11:19:28.590476   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key: {Name:mk91292cc98d71191163856df723afdf525149d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 11:19:28.591953   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 11:19:28.592200   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 11:19:28.592414   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 11:19:28.592581   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 11:19:28.592751   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 11:19:28.601940   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 11:19:28.602968   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 11:19:28.602968   11080 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 11:19:28.603997   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 11:19:28.604332   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 11:19:28.604696   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 11:19:28.605757   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 11:19:28.606105   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 11:19:28.606281   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:28.607895   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 11:19:28.657063   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 11:19:28.708475   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 11:19:28.753169   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 11:19:28.799111   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0709 11:19:28.843096   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 11:19:28.892474   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 11:19:28.936778   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 11:19:28.983720   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 11:19:29.032197   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 11:19:29.078840   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 11:19:29.121438   11080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 11:19:29.166376   11080 ssh_runner.go:195] Run: openssl version
	I0709 11:19:29.174606   11080 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0709 11:19:29.186263   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 11:19:29.214563   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221452   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221529   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.233587   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.241034   11080 command_runner.go:130] > 51391683
	I0709 11:19:29.253531   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 11:19:29.287599   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 11:19:29.319642   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.340563   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.351251   11080 command_runner.go:130] > 3ec20f2e
	I0709 11:19:29.363289   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 11:19:29.394996   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 11:19:29.430863   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439488   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439598   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.451335   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.461060   11080 command_runner.go:130] > b5213941
	I0709 11:19:29.472325   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 11:19:29.502349   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 11:19:29.508349   11080 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.508349   11080 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.509336   11080 kubeadm.go:391] StartCluster: {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:19:29.517326   11080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 11:19:29.552571   11080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0709 11:19:29.583129   11080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 11:19:29.614110   11080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0709 11:19:29.630668   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631001   11080 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631083   11080 kubeadm.go:156] found existing configuration files:
	
	I0709 11:19:29.643858   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 11:19:29.660913   11080 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.660913   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.672874   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 11:19:29.701166   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 11:19:29.719398   11080 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.719398   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.732866   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 11:19:29.764341   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.780362   11080 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.781070   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.793378   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.822887   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 11:19:29.839358   11080 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.839848   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.851450   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0709 11:19:29.868927   11080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 11:19:30.273184   11080 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:30.273184   11080 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:43.382099   11080 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [preflight] Running pre-flight checks
	I0709 11:19:43.382302   11080 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 11:19:43.382490   11080 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382562   11080 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.382843   11080 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.385956   11080 out.go:204]   - Generating certificates and keys ...
	I0709 11:19:43.386701   11080 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0709 11:19:43.386720   11080 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 11:19:43.386939   11080 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386963   11080 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.387517   11080 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387517   11080 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387702   11080 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387746   11080 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387967   11080 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.387967   11080 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.388299   11080 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388370   11080 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388585   11080 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388585   11080 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.392839   11080 out.go:204]   - Booting up control plane ...
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.395906   11080 kubeadm.go:309] [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.396929   11080 kubeadm.go:309] [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 command_runner.go:130] > [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 kubeadm.go:309] [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.396929   11080 command_runner.go:130] > [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.399982   11080 out.go:204]   - Configuring RBAC rules ...
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.401848   11080 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.401848   11080 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.405851   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:43.405851   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:43.408882   11080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0709 11:19:43.427890   11080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0709 11:19:43.436838   11080 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: 2024-07-09 18:17:47.269542400 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Modify: 2024-07-08 15:41:40.000000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Change: 2024-07-09 11:17:38.873000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:43.437660   11080 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0709 11:19:43.437724   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0709 11:19:43.486974   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0709 11:19:44.013734   11080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.028712   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.056718   11080 command_runner.go:130] > serviceaccount/kindnet created
	I0709 11:19:44.082804   11080 command_runner.go:130] > daemonset.apps/kindnet created
	I0709 11:19:44.086715   11080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-849000 minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=multinode-849000 minikube.k8s.io/primary=true
	I0709 11:19:44.115923   11080 command_runner.go:130] > -16
	I0709 11:19:44.121702   11080 ops.go:34] apiserver oom_adj: -16
	I0709 11:19:44.326882   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0709 11:19:44.332192   11080 command_runner.go:130] > node/multinode-849000 labeled
	I0709 11:19:44.342094   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.456107   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:44.849260   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.954493   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.356403   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.456462   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.855390   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.956473   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.355707   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.465842   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.857102   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.969191   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.359571   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.471625   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.845990   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.968255   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.348435   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.444253   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.849560   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.962518   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.355988   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.464938   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.857549   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.960971   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.358892   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.517544   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.859431   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.965459   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.346160   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.448688   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.850874   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.960813   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.349922   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.460568   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.858017   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.978603   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.347266   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.460858   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.852199   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.970042   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.358007   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.467115   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.847966   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.971538   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.352008   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.457997   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.855006   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.967023   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.356509   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.497561   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.848447   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.958599   11080 command_runner.go:130] > NAME      SECRETS   AGE
	I0709 11:19:56.958599   11080 command_runner.go:130] > default   0         0s
	I0709 11:19:56.958599   11080 kubeadm.go:1107] duration metric: took 12.8717652s to wait for elevateKubeSystemPrivileges
	W0709 11:19:56.958599   11080 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 11:19:56.958599   11080 kubeadm.go:393] duration metric: took 27.4491691s to StartCluster
	I0709 11:19:56.958599   11080 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.958599   11080 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:56.961504   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.963374   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 11:19:56.963460   11080 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:19:56.963460   11080 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0709 11:19:56.963779   11080 addons.go:69] Setting default-storageclass=true in profile "multinode-849000"
	I0709 11:19:56.963724   11080 addons.go:69] Setting storage-provisioner=true in profile "multinode-849000"
	I0709 11:19:56.963837   11080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-849000"
	I0709 11:19:56.963837   11080 addons.go:234] Setting addon storage-provisioner=true in "multinode-849000"
	I0709 11:19:56.963837   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:56.963837   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:19:56.964647   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.965248   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.970232   11080 out.go:177] * Verifying Kubernetes components...
	I0709 11:19:56.985249   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:57.211673   11080 command_runner.go:130] > apiVersion: v1
	I0709 11:19:57.211752   11080 command_runner.go:130] > data:
	I0709 11:19:57.211752   11080 command_runner.go:130] >   Corefile: |
	I0709 11:19:57.211752   11080 command_runner.go:130] >     .:53 {
	I0709 11:19:57.211752   11080 command_runner.go:130] >         errors
	I0709 11:19:57.211752   11080 command_runner.go:130] >         health {
	I0709 11:19:57.211752   11080 command_runner.go:130] >            lameduck 5s
	I0709 11:19:57.211752   11080 command_runner.go:130] >         }
	I0709 11:19:57.211752   11080 command_runner.go:130] >         ready
	I0709 11:19:57.211825   11080 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0709 11:19:57.211825   11080 command_runner.go:130] >            pods insecure
	I0709 11:19:57.211825   11080 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0709 11:19:57.211825   11080 command_runner.go:130] >            ttl 30
	I0709 11:19:57.211825   11080 command_runner.go:130] >         }
	I0709 11:19:57.211825   11080 command_runner.go:130] >         prometheus :9153
	I0709 11:19:57.211825   11080 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0709 11:19:57.211914   11080 command_runner.go:130] >            max_concurrent 1000
	I0709 11:19:57.211914   11080 command_runner.go:130] >         }
	I0709 11:19:57.211914   11080 command_runner.go:130] >         cache 30
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loop
	I0709 11:19:57.211914   11080 command_runner.go:130] >         reload
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loadbalance
	I0709 11:19:57.212061   11080 command_runner.go:130] >     }
	I0709 11:19:57.212061   11080 command_runner.go:130] > kind: ConfigMap
	I0709 11:19:57.212061   11080 command_runner.go:130] > metadata:
	I0709 11:19:57.212127   11080 command_runner.go:130] >   creationTimestamp: "2024-07-09T18:19:42Z"
	I0709 11:19:57.212127   11080 command_runner.go:130] >   name: coredns
	I0709 11:19:57.212127   11080 command_runner.go:130] >   namespace: kube-system
	I0709 11:19:57.212127   11080 command_runner.go:130] >   resourceVersion: "259"
	I0709 11:19:57.212301   11080 command_runner.go:130] >   uid: 7f6d77d9-aa71-4460-bf8f-36c58243a4c9
	I0709 11:19:57.212540   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 11:19:57.402732   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:57.866428   11080 command_runner.go:130] > configmap/coredns replaced
	I0709 11:19:57.866428   11080 start.go:946] {"host.minikube.internal": 172.18.192.1} host record injected into CoreDNS's ConfigMap
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.869413   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.870414   11080 cert_rotation.go:137] Starting client certificate rotation controller
	I0709 11:19:57.870414   11080 node_ready.go:35] waiting up to 6m0s for node "multinode-849000" to be "Ready" ...
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.885872   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.885872   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Audit-Id: 6bb3d639-9069-4a29-8363-06f8a9831c96
	I0709 11:19:57.886681   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.886681   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:57.887054   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Audit-Id: f8472087-a57e-416c-8eb7-93f828e86e4a
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.887125   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.887908   11080 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.888641   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.888641   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:19:57.888641   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.922291   11080 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0709 11:19:57.922618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Audit-Id: 71677033-c49e-4d37-8393-48341086209c
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.922733   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"391","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.384286   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:19:58.384390   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384390   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 0be5af66-01cb-451f-b03f-f7b17cb342f0
	I0709 11:19:58.384457   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 73b21b85-deb0-469b-929c-809b7004c7a7
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"401","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:58.384457   11080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-849000" context rescaled to 1 replicas
	I0709 11:19:58.870813   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.871025   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.871025   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.871025   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.873618   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:19:58.873618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Audit-Id: ad90069a-940e-4cdb-af81-263d232584a4
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.874322   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.874523   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.317106   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:59.317937   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:59.319000   11080 addons.go:234] Setting addon default-storageclass=true in "multinode-849000"
	I0709 11:19:59.319148   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:59.320086   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.326790   11080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:59.329802   11080 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:19:59.329802   11080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 11:19:59.329802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.380372   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.380372   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.380485   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.380485   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.383785   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:19:59.384697   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Audit-Id: 2d911086-1ff9-4073-8947-dda5637edc43
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.385157   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.876671   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.876962   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.876962   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.876962   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.882163   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:59.882430   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Audit-Id: ad80d923-4aa0-4499-baf3-ad4ec184183d
	I0709 11:19:59.882575   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.883719   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.884541   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:00.380571   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.380571   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.380571   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.380571   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.383966   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:00.384064   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Audit-Id: 4a57b8ec-36c2-4d90-9953-8040b268ad72
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.384193   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.384193   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.384227   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.384339   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:00.874487   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.874487   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.874577   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.874577   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.878085   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:00.878446   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Audit-Id: 7a79b48d-490c-45b9-8151-9d41d845548a
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.878824   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.384736   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.384736   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.384736   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.384736   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.389692   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:01.389768   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.389768   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.389768   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.389862   11080 round_trippers.go:580]     Audit-Id: 1717079c-a1a4-4056-ab5c-ebb223423669
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.389950   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.391360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.648493   11080 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:01.648493   11080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:20:01.693665   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.693737   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.693813   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:01.876763   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.876763   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.876763   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.876763   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.879377   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:01.879377   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Audit-Id: 0ed34bf6-0054-408f-9605-05f03b8f80e6
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.880494   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.384156   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.384242   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.384242   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.384242   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.387596   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:02.388425   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.388519   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.388569   11080 round_trippers.go:580]     Audit-Id: 259b4cd6-103a-46f6-84e4-4843fc15af0a
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.389015   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.389720   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:02.877416   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.877512   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.877583   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.877583   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.880264   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:02.880264   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Audit-Id: 5562798d-5a0c-40f4-971f-b148e1abc842
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.881513   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.385289   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.385402   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.385505   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.385568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.388996   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.389181   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Audit-Id: 4ecfd387-5cb9-439c-becc-8c20cdb41af7
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.389360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.879716   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.879972   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.879972   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.879972   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.883598   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.883598   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Audit-Id: ec1efeda-bf31-45f7-a76f-11d053440253
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.884488   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.951175   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:03.951212   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:03.951320   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:04.384770   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.384770   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.384770   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.384770   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.390877   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:04.390877   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Audit-Id: 2dfefc86-a830-4942-9bba-6769c2bc2c15
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.391263   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:04.391723   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:04.417029   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:04.417846   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:04.417999   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:04.559903   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:20:04.876248   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.876326   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.876326   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.876326   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.879898   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:04.879898   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Audit-Id: 1a6b0670-7193-473e-b8b3-1e5ed801e6c2
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.880302   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.131215   11080 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0709 11:20:05.131215   11080 command_runner.go:130] > pod/storage-provisioner created
	I0709 11:20:05.382732   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.382846   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.382846   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.382940   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.385465   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:05.385465   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Audit-Id: a9b472dd-22b2-460d-9517-6e634e4a101a
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.386469   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.875363   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.875363   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.875363   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.875363   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.879073   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:05.879530   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Audit-Id: 27ad306f-2225-40f7-8dc1-fa87ab3246f1
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.879530   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.879646   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.879646   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.880110   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.381697   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.381697   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.381697   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.381697   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.385207   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.385655   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Audit-Id: 696fd9a0-d92d-43a9-8bb1-bfc5d15a688d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.385720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:06.619934   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:06.761070   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:06.873491   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.873559   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.873559   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.873615   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.876478   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.876544   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Audit-Id: efcee314-8dd6-4c48-a1a6-4bf059942d04
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.876612   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.876721   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.877563   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:06.908144   11080 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0709 11:20:06.908847   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses
	I0709 11:20:06.908910   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.908910   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.908910   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.912483   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.912686   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Length: 1273
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Audit-Id: 739ee856-002a-4545-9544-df6be0efec2a
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.912921   11080 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0709 11:20:06.913516   11080 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.913596   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0709 11:20:06.913596   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:20:06.913704   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.916586   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.916586   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Audit-Id: a5ae0cbf-9df0-489a-8da4-2e8f3aa910ad
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Length: 1220
	I0709 11:20:06.917609   11080 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.921571   11080 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0709 11:20:06.923563   11080 addons.go:510] duration metric: took 9.9600694s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0709 11:20:07.375568   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.375568   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.375568   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.375568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.378569   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:07.379620   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Audit-Id: bd77f714-dc63-4d2c-bf78-52162a6b64d7
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.380117   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:07.875799   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.875861   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.875861   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.875861   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.880450   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:07.880704   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Audit-Id: 74d6bf60-f1ad-4db1-861f-6ea7ba47b092
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.881227   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:08.380911   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.381007   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.381007   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.381059   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.384650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.384650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Audit-Id: 46699637-e1f2-4ffe-9a5a-606601b7ce76
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.385170   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.385430   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.385689   11080 node_ready.go:49] node "multinode-849000" has status "Ready":"True"
	I0709 11:20:08.385689   11080 node_ready.go:38] duration metric: took 10.5152391s for node "multinode-849000" to be "Ready" ...
	I0709 11:20:08.385689   11080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:08.385689   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:08.385689   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.385689   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.385689   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.389650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.389650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Audit-Id: c7a373c1-e4d1-49a7-b63d-f1f5fe5cbdfe
	I0709 11:20:08.391677   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0709 11:20:08.396680   11080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:08.396680   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.396680   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.396680   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.397654   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.401662   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:08.401662   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Audit-Id: f0c73321-6fb5-4d40-a2ca-139f50a7329a
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.402451   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.403030   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.403030   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.403030   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.403030   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.409674   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:08.409674   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.409674   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Audit-Id: f9f6bf0c-50a8-416b-b487-7a0381a93ada
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.411023   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.904464   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.904538   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.904538   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.904538   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.924115   11080 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0709 11:20:08.924115   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.924115   11080 round_trippers.go:580]     Audit-Id: 5c7a83f8-f6fb-40c3-af41-44c2d80fb1eb
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.924509   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.925643   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.925643   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.925643   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.925643   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.942620   11080 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0709 11:20:08.943087   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Audit-Id: 1a00f334-2356-4158-b461-0e0c6821e0b6
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.945720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.412235   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.412389   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.412389   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.412389   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.417018   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.417018   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Audit-Id: 1bacafec-faf2-4175-9ce5-e5206b1140e1
	I0709 11:20:09.417950   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:09.418720   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.418777   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.418777   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.418777   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.421159   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.421159   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Audit-Id: 2bf8156c-3153-4e3e-b8c5-b1b8a2e4e26e
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.423016   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.901337   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.901337   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.901337   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.901337   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.953926   11080 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0709 11:20:09.953926   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Audit-Id: 1aada5b5-53a1-4882-b982-815daf34a5c5
	I0709 11:20:09.955836   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0709 11:20:09.956635   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.956732   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.956732   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.956732   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.959094   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.959094   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Audit-Id: ae59e9a3-f8ac-437b-9c75-8931309c73ad
	I0709 11:20:09.960120   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.960120   11080 pod_ready.go:92] pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.960661   11080 pod_ready.go:81] duration metric: took 1.5639759s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-849000
	I0709 11:20:09.960661   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.960828   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.960828   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.969075   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.969075   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Audit-Id: a17b78fa-415e-466e-8ae8-a1653319ab27
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.969743   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-849000","namespace":"kube-system","uid":"d9414b5f-b783-46b5-bd41-e07fbd338491","resourceVersion":"303","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.206.134:2379","kubernetes.io/config.hash":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.mirror":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.seen":"2024-07-09T18:19:42.812164051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0709 11:20:09.969743   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.970269   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.970321   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.970321   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.979269   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.979269   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Audit-Id: cfddc806-0d43-46bb-bd86-3712a4ab9215
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.979994   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.980431   11080 pod_ready.go:92] pod "etcd-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.980497   11080 pod_ready.go:81] duration metric: took 19.7697ms for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980497   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980690   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-849000
	I0709 11:20:09.980722   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.980753   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.980753   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.984639   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:09.984639   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Audit-Id: 4f8bf9fa-3246-46ce-b3d4-8ea91623128e
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.985248   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-849000","namespace":"kube-system","uid":"185dfcae-7f97-43a4-8ad7-9c2e18ad93f4","resourceVersion":"300","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.206.134:8443","kubernetes.io/config.hash":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.mirror":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0709 11:20:09.986253   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.986253   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.986320   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.986320   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.990658   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.990658   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Audit-Id: fc9d97ed-a036-474e-af5f-aba9fc7ea966
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.991081   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.991515   11080 pod_ready.go:92] pod "kube-apiserver-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.991547   11080 pod_ready.go:81] duration metric: took 11.0006ms for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991547   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991623   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-849000
	I0709 11:20:09.991803   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.991803   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.991803   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.002697   11080 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 11:20:10.002697   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.002697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.002697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Audit-Id: 5618d530-048d-4e22-a41f-dbc85f57723c
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.003187   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.003187   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.003445   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-849000","namespace":"kube-system","uid":"84786301-1bd7-4d77-900b-1130c36259bc","resourceVersion":"316","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.mirror":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165951Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0709 11:20:10.004195   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.004275   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.004275   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.004275   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.011235   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:10.011235   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Audit-Id: b83b8a86-c88b-4eda-adbc-8a4b41174f1d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.011896   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.012314   11080 pod_ready.go:92] pod "kube-controller-manager-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.012440   11080 pod_ready.go:81] duration metric: took 20.8924ms for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012440   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012550   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qv64t
	I0709 11:20:10.012621   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.012662   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.012662   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.016102   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.016102   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Audit-Id: 9328b861-5000-4723-bef4-66bdf082cdc5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.016102   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qv64t","generateName":"kube-proxy-","namespace":"kube-system","uid":"64fd2bca-c117-405b-98c4-db980781839b","resourceVersion":"407","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"20beb658-ecf0-4085-ad20-237b0700e9f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20beb658-ecf0-4085-ad20-237b0700e9f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0709 11:20:10.017415   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.017554   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.017554   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.017554   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.021755   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.021755   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Audit-Id: 7b57217c-1b40-42ea-bd05-ba32c6c09379
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.022911   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.023043   11080 pod_ready.go:92] pod "kube-proxy-qv64t" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.023043   11080 pod_ready.go:81] duration metric: took 10.6037ms for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.023043   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.182509   11080 request.go:629] Waited for 159.4656ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182778   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182865   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.182865   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.182897   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.186242   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.186242   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Audit-Id: 821c7888-15a2-4ad9-a6ba-adc53ab8a4f6
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.186554   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.186784   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-849000","namespace":"kube-system","uid":"03dff506-a8f6-41bd-baac-1ef9ad6892e3","resourceVersion":"323","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.mirror":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.seen":"2024-07-09T18:19:42.812159751Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0709 11:20:10.385659   11080 request.go:629] Waited for 198.2784ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.385659   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.385659   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.389558   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.389771   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Audit-Id: 9cc904cb-e823-4a93-85c2-226f98e81fdf
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.390169   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.390760   11080 pod_ready.go:92] pod "kube-scheduler-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.390865   11080 pod_ready.go:81] duration metric: took 367.8204ms for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.390865   11080 pod_ready.go:38] duration metric: took 2.0051694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:10.390944   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0709 11:20:10.403679   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 11:20:10.435279   11080 command_runner.go:130] > 2115
	I0709 11:20:10.436278   11080 api_server.go:72] duration metric: took 13.4725942s to wait for apiserver process to appear ...
	I0709 11:20:10.436474   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0709 11:20:10.436474   11080 api_server.go:253] Checking apiserver healthz at https://172.18.206.134:8443/healthz ...
	I0709 11:20:10.445806   11080 api_server.go:279] https://172.18.206.134:8443/healthz returned 200:
	ok
	I0709 11:20:10.445926   11080 round_trippers.go:463] GET https://172.18.206.134:8443/version
	I0709 11:20:10.445926   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.445926   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.445926   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.448281   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:10.448281   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Audit-Id: 7be21a54-db6a-4318-a5ec-f0cce4ef44ab
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.448527   11080 round_trippers.go:580]     Content-Length: 263
	I0709 11:20:10.448527   11080 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0709 11:20:10.448527   11080 api_server.go:141] control plane version: v1.30.2
	I0709 11:20:10.448527   11080 api_server.go:131] duration metric: took 12.0534ms to wait for apiserver health ...
	I0709 11:20:10.448527   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 11:20:10.589225   11080 request.go:629] Waited for 140.697ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.589493   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.589493   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.594092   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.594092   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Audit-Id: 2b8208e7-66c3-407d-a513-81f6367a1a50
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.594092   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.594453   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.594453   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.596104   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.598949   11080 system_pods.go:59] 8 kube-system pods found
	I0709 11:20:10.599087   11080 system_pods.go:61] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.599087   11080 system_pods.go:74] duration metric: took 150.5589ms to wait for pod list to return data ...
	I0709 11:20:10.599087   11080 default_sa.go:34] waiting for default service account to be created ...
	I0709 11:20:10.792113   11080 request.go:629] Waited for 192.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792224   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792412   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.792412   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.792412   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.796230   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.796230   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.796230   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Content-Length: 261
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Audit-Id: bc150d93-fb7c-4582-beac-a89c1e26ce41
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.796858   11080 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1dc179c9-669f-4ab7-8a39-5d6fc6670d2d","resourceVersion":"341","creationTimestamp":"2024-07-09T18:19:56Z"}}]}
	I0709 11:20:10.797248   11080 default_sa.go:45] found service account: "default"
	I0709 11:20:10.797329   11080 default_sa.go:55] duration metric: took 198.009ms for default service account to be created ...
	I0709 11:20:10.797329   11080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 11:20:10.981424   11080 request.go:629] Waited for 183.8495ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981505   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981752   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.981752   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.981752   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.987139   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:10.987139   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.987139   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Audit-Id: dc7e70c7-c26f-47bd-af7e-e59f9f0e6a12
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.987854   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.990198   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.994984   11080 system_pods.go:86] 8 kube-system pods found
	I0709 11:20:10.994984   11080 system_pods.go:89] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.995749   11080 system_pods.go:126] duration metric: took 198.4185ms to wait for k8s-apps to be running ...
	I0709 11:20:10.995749   11080 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 11:20:11.006411   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:20:11.032299   11080 system_svc.go:56] duration metric: took 36.2519ms WaitForService to wait for kubelet
	I0709 11:20:11.032384   11080 kubeadm.go:576] duration metric: took 14.0686983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:20:11.032449   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0709 11:20:11.185036   11080 request.go:629] Waited for 152.3704ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:11.185036   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:11.185036   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:11.188676   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:11.188676   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:11 GMT
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Audit-Id: de445958-d4f3-421b-bce6-7208e043ef68
	I0709 11:20:11.189854   11080 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0709 11:20:11.190610   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 11:20:11.190610   11080 node_conditions.go:123] node cpu capacity is 2
	I0709 11:20:11.190610   11080 node_conditions.go:105] duration metric: took 158.1605ms to run NodePressure ...
	I0709 11:20:11.190610   11080 start.go:240] waiting for startup goroutines ...
	I0709 11:20:11.190610   11080 start.go:245] waiting for cluster config update ...
	I0709 11:20:11.190610   11080 start.go:254] writing updated cluster config ...
	I0709 11:20:11.194395   11080 out.go:177] 
	I0709 11:20:11.197726   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.210868   11080 out.go:177] * Starting "multinode-849000-m02" worker node in "multinode-849000" cluster
	I0709 11:20:11.213536   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:20:11.214479   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:20:11.214815   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:20:11.215058   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:20:11.215282   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.219596   11080 start.go:360] acquireMachinesLock for multinode-849000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:20:11.219782   11080 start.go:364] duration metric: took 159µs to acquireMachinesLock for "multinode-849000-m02"
	I0709 11:20:11.219811   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0709 11:20:11.219811   11080 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0709 11:20:11.223353   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:20:11.223353   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:20:11.223353   11080 client.go:168] LocalClient.Create starting
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224657   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:20:13.151358   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:20:13.151782   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:13.151847   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:20:14.883405   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:20:14.883642   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:14.883703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:20.080459   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:20:20.573750   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: Creating VM...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:23.656383   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:23.657490   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:23.657490   11080 main.go:141] libmachine: Using switch "Default Switch"
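The switch-selection step above runs `Get-VMSwitch`, filters for an External switch or the well-known "Default Switch" GUID, sorts by `SwitchType`, and takes the first JSON result. A minimal Go sketch of that selection logic, assuming the field names shown in the JSON output above (this is an illustration, not the driver's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields selected by the Get-VMSwitch query in the log.
// SwitchType follows Hyper-V's enum: 0 = Private, 1 = Internal, 2 = External.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// pickSwitch takes the first entry of the (already sorted) JSON array,
// which is how the driver ends up "Using switch \"Default Switch\"" when
// no External switch exists on the host.
func pickSwitch(raw []byte) (string, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return "", err
	}
	if len(switches) == 0 {
		return "", fmt.Errorf("no usable Hyper-V switch found")
	}
	return switches[0].Name, nil
}

func main() {
	// The stdout captured in the log above, as a single JSON document.
	out := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	name, err := pickSwitch(out)
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // Default Switch
}
```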
	I0709 11:20:23.657579   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:25.447625   11080 main.go:141] libmachine: Creating VHD
	I0709 11:20:25.447625   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5E53C6D0-5109-4D35-B1EC-1393270CA44B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:20:29.284763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:20:32.544147   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:32.544825   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:32.544942   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -SizeBytes 20000MB
	I0709 11:20:35.179825   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [stderr =====>] : 
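The disk-creation dance above is deliberate: the driver makes a tiny 10MB *fixed* VHD (so the raw sectors are directly addressable), writes a "magic" tar header containing the SSH key into it, then converts it to a dynamic VHD and resizes it to the requested 20000MB; the guest detects the tar stream at the start of the disk on first boot and extracts it. A hedged Go sketch that just assembles those three PowerShell commands (paths and sizes mirror the log; `vhdCommands` is a hypothetical helper, not the driver's real one):

```go
package main

import "fmt"

// vhdCommands builds the Hyper-V PowerShell commands used to create the
// machine disk: fixed 10MB VHD, then (after the tar header with the SSH key
// is written into it) conversion to dynamic and a resize to the final size.
func vhdCommands(machinesDir, name string, sizeMB int) []string {
	fixed := fmt.Sprintf(`%s\%s\fixed.vhd`, machinesDir, name)
	disk := fmt.Sprintf(`%s\%s\disk.vhd`, machinesDir, name)
	return []string{
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
		// ...between these steps the driver writes the magic tar header
		// and SSH key tar header directly into fixed.vhd...
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes %dMB`, disk, sizeMB),
	}
}

func main() {
	for _, c := range vhdCommands(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines`, "multinode-849000-m02", 20000) {
		fmt.Println(c)
	}
}
```

Each string would be passed to `powershell.exe -NoProfile -NonInteractive`, exactly as the `[executing ==>]` lines show.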
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-849000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000-m02 -DynamicMemoryEnabled $false
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000-m02 -Count 2
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:43.474205   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\boot2docker.iso'
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:46.097188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd'
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: Starting VM...
	I0709 11:20:49.141353   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000-m02
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:52.444588   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:20:52.444802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:54.848352   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:57.488165   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:57.488298   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:58.493459   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:00.761195   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:03.353161   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:03.353743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:04.368700   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:06.644937   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:10.193913   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:16.096106   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:18.442305   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stderr =====>] : 
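The "Waiting for host to start..." section above is a poll loop: the driver repeatedly queries `( Get-VM ... ).state` and `...ipaddresses[0]`, and an empty stdout means DHCP hasn't assigned an address yet, so it sleeps (~1s between the timestamps) and retries until `172.18.205.211` appears. A minimal sketch of that loop with the PowerShell invocation abstracted behind a query function (an assumption for this sketch):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls query until the guest's first NIC reports an address.
// An empty result is treated as "not ready yet", matching the empty-stdout
// lines in the log before the address shows up.
func waitForIP(query func() (string, error), attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		ip, err := query()
		if err != nil {
			return "", err
		}
		if ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	// Simulate DHCP assigning an address on the third poll.
	replies := []string{"", "", "172.18.205.211"}
	i := 0
	ip, err := waitForIP(func() (string, error) {
		r := replies[i]
		i++
		return r, nil
	}, 10, time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 172.18.205.211
}
```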
	I0709 11:21:23.279312   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:21:23.279415   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:25.559526   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:25.560574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:25.560679   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:28.232227   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:28.233232   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:28.238921   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:28.250822   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:28.250822   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:21:28.388458   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:21:28.388571   11080 buildroot.go:166] provisioning hostname "multinode-849000-m02"
	I0709 11:21:28.388571   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:30.618011   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:33.212355   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:33.212671   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:33.219551   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:33.220082   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:33.220082   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000-m02 && echo "multinode-849000-m02" | sudo tee /etc/hostname
	I0709 11:21:33.391210   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000-m02
	
	I0709 11:21:33.391343   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:35.578543   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:38.191886   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:38.192615   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:38.192615   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:21:38.341565   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:21:38.341639   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:21:38.341639   11080 buildroot.go:174] setting up certificates
	I0709 11:21:38.341639   11080 provision.go:84] configureAuth start
	I0709 11:21:38.341639   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:43.076717   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:45.280910   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:45.281082   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:45.281156   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:47.878898   11080 provision.go:143] copyHostCerts
	I0709 11:21:47.879605   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:21:47.880180   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:21:47.880180   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:21:47.880971   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:21:47.882540   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:21:47.883125   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:21:47.883125   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:21:47.883679   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:21:47.885058   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:21:47.885436   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:21:47.885557   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:21:47.886134   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:21:47.887498   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000-m02 san=[127.0.0.1 172.18.205.211 localhost minikube multinode-849000-m02]
	I0709 11:21:48.001674   11080 provision.go:177] copyRemoteCerts
	I0709 11:21:48.013068   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:21:48.014084   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:50.250018   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:50.250215   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:50.250314   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:52.836979   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:52.837914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:52.838808   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:21:52.940691   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9274594s)
	I0709 11:21:52.940691   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:21:52.941438   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:21:52.990054   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:21:52.990054   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:21:53.038708   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:21:53.039254   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0709 11:21:53.086100   11080 provision.go:87] duration metric: took 14.7444116s to configureAuth
	I0709 11:21:53.086158   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:21:53.086860   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:21:53.086990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:55.350257   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:55.351179   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:55.351218   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:57.996542   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:57.997434   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:57.997434   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:21:58.134576   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:21:58.134576   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:21:58.135124   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:21:58.135124   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:00.283090   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:00.284070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:00.284213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:02.866133   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:02.866377   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:02.871379   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:02.872132   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:02.872132   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.206.134"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:22:03.038743   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.206.134
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:22:03.038743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:05.225105   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:07.815935   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:07.816766   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:07.816766   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:22:10.033737   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
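The two SSH commands above follow a write-new/diff/replace pattern: the candidate unit file is written to `docker.service.new`, compared against the installed one, and only swapped in (with a daemon-reload and restart) when it differs or is missing. A minimal sketch of that idempotent-install idea in Python — `install_if_changed` is a hypothetical helper, not minikube code:

```python
import filecmp
import os

def install_if_changed(new_content: str, dest: str) -> bool:
    """Write dest only when the content differs, mirroring the log's
    'tee docker.service.new; diff || { mv && restart; }' pattern.
    Returns True when the file was (re)installed."""
    new_path = dest + ".new"
    with open(new_path, "w") as f:
        f.write(new_content)
    if os.path.exists(dest) and filecmp.cmp(new_path, dest, shallow=False):
        os.remove(new_path)      # identical content: nothing to do
        return False
    os.replace(new_path, dest)   # missing or different: swap in atomically
    return True
```

On first run the destination does not exist (as in the log, where `diff` fails with "No such file or directory"), so the new file is installed; a second run with identical content is a no-op.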
	
	I0709 11:22:10.033805   11080 machine.go:97] duration metric: took 46.7543344s to provisionDockerMachine
	I0709 11:22:10.033805   11080 client.go:171] duration metric: took 1m58.8100611s to LocalClient.Create
	I0709 11:22:10.033904   11080 start.go:167] duration metric: took 1m58.81016s to libmachine.API.Create "multinode-849000"
	I0709 11:22:10.033904   11080 start.go:293] postStartSetup for "multinode-849000-m02" (driver="hyperv")
	I0709 11:22:10.033904   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:22:10.049483   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:22:10.049483   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:12.196759   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:14.773966   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:14.774211   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:14.774388   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:14.880469   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8308404s)
	I0709 11:22:14.893820   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:22:14.900205   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:22:14.900586   11080 command_runner.go:130] > ID=buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:22:14.900586   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:22:14.900878   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:22:14.900958   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:22:14.901694   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:22:14.902949   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:22:14.903007   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:22:14.914648   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:22:14.931988   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:22:14.976672   11080 start.go:296] duration metric: took 4.9427507s for postStartSetup
	I0709 11:22:14.980296   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:17.149588   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:19.731744   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:22:19.734373   11080 start.go:128] duration metric: took 2m8.5141378s to createHost
	I0709 11:22:19.734498   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:21.884569   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:21.885475   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:21.885570   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:24.462310   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:24.462866   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:24.462866   11080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 11:22:24.602515   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549344.609926885
	
	I0709 11:22:24.602629   11080 fix.go:216] guest clock: 1720549344.609926885
	I0709 11:22:24.602629   11080 fix.go:229] Guest: 2024-07-09 11:22:24.609926885 -0700 PDT Remote: 2024-07-09 11:22:19.7344985 -0700 PDT m=+344.108245701 (delta=4.875428385s)
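The guest-clock check above reads the VM's epoch time over SSH (`date +%s.%N`) and compares it with the host's wall clock to get the skew, then corrects it with `sudo date -s @<epoch>`. A small sketch of the delta computation, using the exact values from this log (the `clock_delta` helper is illustrative, not minikube code):

```python
from datetime import datetime, timezone

def clock_delta(guest_epoch: float, host: datetime) -> float:
    """Guest-minus-host clock skew in seconds (positive: guest is ahead)."""
    return guest_epoch - host.timestamp()

# Values from the log: guest clock 1720549344.609926885,
# host (remote) clock 2024-07-09 11:22:19.7344985 -0700 PDT.
host = datetime(2024, 7, 9, 18, 22, 19, 734498, tzinfo=timezone.utc)
delta = clock_delta(1720549344.609926885, host)  # ~4.875s, matching the log
```

Since the skew exceeds a tolerance, the provisioner resets the guest clock to the truncated host epoch, which is why the subsequent `sudo date -s @1720549344` lands on 18:22:24 UTC.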
	I0709 11:22:24.602743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:26.788501   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:29.322797   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:29.323325   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:29.323492   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549344
	I0709 11:22:29.467864   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:22:24 UTC 2024
	
	I0709 11:22:29.467922   11080 fix.go:236] clock set: Tue Jul  9 18:22:24 UTC 2024
	 (err=<nil>)
	I0709 11:22:29.467976   11080 start.go:83] releasing machines lock for "multinode-849000-m02", held for 2m18.2477075s
	I0709 11:22:29.468213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:31.622432   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:31.623654   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:31.623715   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:34.183731   11080 out.go:177] * Found network options:
	I0709 11:22:34.186860   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.188920   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.191174   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.194227   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 11:22:34.195301   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.198398   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:22:34.198526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:34.208413   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:22:34.209355   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474885   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:39.120904   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.121123   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.121331   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.150109   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.214930   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0709 11:22:39.216101   11080 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0076706s)
	W0709 11:22:39.216101   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:22:39.228355   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:22:39.361349   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:22:39.361418   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:22:39.361418   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1630028s)
	I0709 11:22:39.361567   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
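The `find ... -exec mv {} {}.mk_disabled` run above neutralizes competing bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them, so they can be restored later. A rough equivalent of that rename pass (paths and function name are illustrative only):

```python
import glob
import os

def disable_bridge_cnis(net_d: str = "/etc/cni/net.d") -> list:
    """Rename bridge/podman CNI configs to *.mk_disabled, like the
    log's 'find -maxdepth 1 ... -exec mv {} {}.mk_disabled'."""
    disabled = []
    for pattern in ("*bridge*", "*podman*"):
        for path in glob.glob(os.path.join(net_d, pattern)):
            if path.endswith(".mk_disabled") or not os.path.isfile(path):
                continue  # already disabled, or not a regular file
            os.rename(path, path + ".mk_disabled")
            disabled.append(path)
    return disabled
```

In this run the only match was `/etc/cni/net.d/87-podman-bridge.conflist`, which is the single entry reported as disabled.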
	I0709 11:22:39.361605   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:39.361773   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:39.395534   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:22:39.411076   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:22:39.440578   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:22:39.459507   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:22:39.472271   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:22:39.503478   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.535129   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:22:39.565594   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.596645   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:22:39.626303   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:22:39.657871   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:22:39.687857   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
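The sequence of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place to force the cgroupfs driver, normalize the runc runtime name, and set the CNI conf dir. The key substitution is the `SystemdCgroup` one; a Python sketch of that same regex edit (applied to a string rather than a file, for illustration):

```python
import re

def force_cgroupfs(config_toml: str) -> str:
    """Mirror the log's sed: rewrite any 'SystemdCgroup = ...' line to
    'SystemdCgroup = false', preserving its leading indentation."""
    return re.sub(r"^( *)SystemdCgroup = .*$",
                  r"\1SystemdCgroup = false",
                  config_toml,
                  flags=re.MULTILINE)
```

Lines that don't match the pattern pass through untouched, which is why these edits are safe to re-run on an already-converted config.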
	I0709 11:22:39.718726   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:22:39.737354   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:22:39.750092   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:22:39.780554   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:39.961136   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 11:22:40.003477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:40.015211   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:22:40.037706   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:22:40.037931   11080 command_runner.go:130] > [Unit]
	I0709 11:22:40.037931   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:22:40.037931   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:22:40.037931   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:22:40.037931   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:22:40.037996   11080 command_runner.go:130] > [Service]
	I0709 11:22:40.037996   11080 command_runner.go:130] > Type=notify
	I0709 11:22:40.037996   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:22:40.037996   11080 command_runner.go:130] > Environment=NO_PROXY=172.18.206.134
	I0709 11:22:40.037996   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:22:40.037996   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:22:40.038089   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:22:40.038089   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:22:40.038089   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:22:40.038089   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:22:40.038089   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:22:40.038158   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:22:40.038158   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:22:40.038158   11080 command_runner.go:130] > ExecStart=
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:22:40.038260   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:22:40.038260   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:22:40.038260   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:22:40.038323   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:22:40.038430   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:22:40.038469   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:22:40.038532   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:22:40.038566   11080 command_runner.go:130] > Delegate=yes
	I0709 11:22:40.038566   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:22:40.038566   11080 command_runner.go:130] > KillMode=process
	I0709 11:22:40.038566   11080 command_runner.go:130] > [Install]
	I0709 11:22:40.038609   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:22:40.055979   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.091794   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:22:40.154011   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.190664   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.226820   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:22:40.287595   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.308575   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:40.342070   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:22:40.354449   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:22:40.359803   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:22:40.371212   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:22:40.388323   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:22:40.433437   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:22:40.633922   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:22:40.820826   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:22:40.820826   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:22:40.864181   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:41.057366   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:23:42.172852   11080 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0709 11:23:42.172852   11080 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0709 11:23:42.173160   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1155866s)
	I0709 11:23:42.185419   11080 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.209973   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.210951   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211574   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211639   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0709 11:23:42.221589   11080 out.go:177] 
	W0709 11:23:42.223827   11080 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0709 11:23:42.223827   11080 out.go:239] * 
	W0709 11:23:42.225718   11080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0709 11:23:42.228228   11080 out.go:177] 
	
	
	==> Docker <==
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597835991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597891091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597905791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597983991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597776491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d8c6b21616c767448c4be98bae932ed2b404a3dadcf2b2b4b157e8bcf347ea/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a33ce3348449c0faec48fb58b4574718ba6b78d837824e60579921c71f06d76/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968184436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968452735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968474235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968801835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.141801596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.142933705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.143853812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.144140014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904534514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904809014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904875715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904980715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:18 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/216d18e70c2fb87f116d16247afca62184ce070d4aca7bbce19d833808db917c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 09 18:24:19 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285320124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285707025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285773326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285917526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c7a0fcb9e869e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   216d18e70c2fb       busybox-fc5497c4f-f2j8m
	c150592e658c3       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   2a33ce3348449       coredns-7db6d8ff4d-lzsvc
	37c7b8e14dc9c       6e38f40d628db                                                                                         21 minutes ago      Running             storage-provisioner       0                   06d8c6b21616c       storage-provisioner
	f3de6fb5f7f77       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              21 minutes ago      Running             kindnet-cni               0                   668c809456776       kindnet-8ww8c
	02ab9d1727686       53c535741fb44                                                                                         21 minutes ago      Running             kube-proxy                0                   0a60f24294838       kube-proxy-qv64t
	0272c56037c7d       3861cfcd7c04c                                                                                         21 minutes ago      Running             etcd                      0                   2c574be2cc6d3       etcd-multinode-849000
	8661e349d48ab       7820c83aa1394                                                                                         21 minutes ago      Running             kube-scheduler            0                   b9412aa955ab7       kube-scheduler-multinode-849000
	a89ee753e7759       e874818b3caac                                                                                         21 minutes ago      Running             kube-controller-manager   0                   a610e3d24fa06       kube-controller-manager-multinode-849000
	556077ae2825d       56ce0fd9fb532                                                                                         21 minutes ago      Running             kube-apiserver            0                   2035bb8593f0e       kube-apiserver-multinode-849000
	
	
	==> coredns [c150592e658c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = eabdad51eef6fc649fa850c178ba451366b41048c1c621a6be25e706245d9103e597e4866d961c875c300d6a366ff9db50ab3afe55608b789039c53007846ed6
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54651 - 41351 "HINFO IN 6752767091270397564.1917026836058955763. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104932825s
	[INFO] 10.244.0.3:37665 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218301s
	[INFO] 10.244.0.3:33292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.095768808s
	[INFO] 10.244.0.3:51028 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033779908s
	[INFO] 10.244.0.3:52198 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.254317433s
	[INFO] 10.244.0.3:58685 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001442s
	[INFO] 10.244.0.3:50205 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.085049073s
	[INFO] 10.244.0.3:41462 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002117s
	[INFO] 10.244.0.3:46161 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002965s
	[INFO] 10.244.0.3:40010 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.038270523s
	[INFO] 10.244.0.3:50213 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181901s
	[INFO] 10.244.0.3:40333 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208801s
	[INFO] 10.244.0.3:33479 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001618s
	[INFO] 10.244.0.3:44590 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223001s
	[INFO] 10.244.0.3:58378 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001694s
	[INFO] 10.244.0.3:35676 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.0.3:50088 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126901s
	[INFO] 10.244.0.3:60351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000289801s
	[INFO] 10.244.0.3:33623 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000197201s
	[INFO] 10.244.0.3:60126 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001055s
	[INFO] 10.244.0.3:44284 - 5 "PTR IN 1.192.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150901s
	
	
	==> describe nodes <==
	Name:               multinode-849000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-849000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=multinode-849000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 18:19:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-849000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 18:41:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:20:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.206.134
	  Hostname:    multinode-849000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 af90c209c8a84d288c2d79663fa33a94
	  System UUID:                69e31ac5-0527-9e4a-81b6-556c6bac7963
	  Boot ID:                    5c1387e9-724e-4a1c-a3cc-dde77e8449e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f2j8m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-lzsvc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-multinode-849000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-8ww8c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-multinode-849000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-multinode-849000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-qv64t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-multinode-849000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node multinode-849000 event: Registered Node multinode-849000 in Controller
	  Normal  NodeReady                21m                kubelet          Node multinode-849000 status is now: NodeReady
	
	
	Name:               multinode-849000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-849000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=multinode-849000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_09T11_40_09_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 18:40:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-849000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 18:41:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:40:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:40:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:40:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:40:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.196.236
	  Hostname:    multinode-849000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 30665cda6be840e19de2d42101ee89bb
	  System UUID:                ddf7b545-8cfa-674d-b55f-fd48f2f9d4f5
	  Boot ID:                    c8391cc6-6aee-4957-ada5-1a481b0a3745
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hjks    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kindnet-sn4kd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      82s
	  kube-system                 kube-proxy-wdskl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 68s                kube-proxy       
	  Normal  NodeHasSufficientMemory  82s (x2 over 82s)  kubelet          Node multinode-849000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x2 over 82s)  kubelet          Node multinode-849000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x2 over 82s)  kubelet          Node multinode-849000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           79s                node-controller  Node multinode-849000-m03 event: Registered Node multinode-849000-m03 in Controller
	  Normal  NodeReady                58s                kubelet          Node multinode-849000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.061894] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul 9 18:18] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.172355] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[Jul 9 18:19] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.106297] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.542997] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.194600] systemd-fstab-generator[1056]: Ignoring "noauto" option for root device
	[  +0.225984] systemd-fstab-generator[1070]: Ignoring "noauto" option for root device
	[  +2.819794] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.174764] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.183052] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.284648] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[ +10.989764] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.110491] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.025456] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.572905] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.100801] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.070675] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.120083] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.551679] systemd-fstab-generator[2475]: Ignoring "noauto" option for root device
	[  +0.193907] kauditd_printk_skb: 12 callbacks suppressed
	[Jul 9 18:20] kauditd_printk_skb: 51 callbacks suppressed
	[Jul 9 18:24] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [0272c56037c7] <==
	{"level":"info","ts":"2024-07-09T18:19:37.819296Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-09T18:19:37.819456Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-09T18:19:37.820534Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.18.206.134:2379"}
	{"level":"info","ts":"2024-07-09T18:19:37.82294Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"88434b99d7bbd165","local-member-id":"e42eecf9634a170","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.8454Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.845615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:29:37.886741Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":687}
	{"level":"info","ts":"2024-07-09T18:29:37.900514Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":687,"took":"13.301342ms","hash":2108544045,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2121728,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-09T18:29:37.900644Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2108544045,"revision":687,"compact-revision":-1}
	{"level":"info","ts":"2024-07-09T18:34:37.903933Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-07-09T18:34:37.912189Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":927,"took":"7.652225ms","hash":1821337612,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-09T18:34:37.912513Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1821337612,"revision":927,"compact-revision":687}
	{"level":"info","ts":"2024-07-09T18:35:57.287138Z","caller":"traceutil/trace.go:171","msg":"trace[1176997031] linearizableReadLoop","detail":"{readStateIndex:1442; appliedIndex:1441; }","duration":"158.59851ms","start":"2024-07-09T18:35:57.12852Z","end":"2024-07-09T18:35:57.287118Z","steps":["trace[1176997031] 'read index received'  (duration: 137.916144ms)","trace[1176997031] 'applied index is now lower than readState.Index'  (duration: 20.680866ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-09T18:35:57.287544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.000512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-4hjks\" ","response":"range_response_count:1 size:2221"}
	{"level":"info","ts":"2024-07-09T18:35:57.287811Z","caller":"traceutil/trace.go:171","msg":"trace[632773735] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-4hjks; range_end:; response_count:1; response_revision:1233; }","duration":"159.270012ms","start":"2024-07-09T18:35:57.128515Z","end":"2024-07-09T18:35:57.287785Z","steps":["trace[632773735] 'agreement among raft nodes before linearized reading'  (duration: 158.812611ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:37:35.826214Z","caller":"traceutil/trace.go:171","msg":"trace[478726099] transaction","detail":"{read_only:false; response_revision:1311; number_of_response:1; }","duration":"158.19521ms","start":"2024-07-09T18:37:35.667982Z","end":"2024-07-09T18:37:35.826177Z","steps":["trace[478726099] 'process raft request'  (duration: 158.074409ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:39:37.921147Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1168}
	{"level":"info","ts":"2024-07-09T18:39:37.929404Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1168,"took":"7.948126ms","hash":3253994334,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-09T18:39:37.929571Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3253994334,"revision":1168,"compact-revision":927}
	{"level":"info","ts":"2024-07-09T18:40:13.451954Z","caller":"traceutil/trace.go:171","msg":"trace[1502299339] transaction","detail":"{read_only:false; response_revision:1471; number_of_response:1; }","duration":"179.100678ms","start":"2024-07-09T18:40:13.272835Z","end":"2024-07-09T18:40:13.451935Z","steps":["trace[1502299339] 'process raft request'  (duration: 178.950978ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T18:40:14.005634Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.253227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-849000-m03\" ","response":"range_response_count:1 size:2848"}
	{"level":"info","ts":"2024-07-09T18:40:14.005805Z","caller":"traceutil/trace.go:171","msg":"trace[2101599561] range","detail":"{range_begin:/registry/minions/multinode-849000-m03; range_end:; response_count:1; response_revision:1472; }","duration":"132.404128ms","start":"2024-07-09T18:40:13.873328Z","end":"2024-07-09T18:40:14.005732Z","steps":["trace[2101599561] 'range keys from in-memory index tree'  (duration: 131.983226ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:40:19.670021Z","caller":"traceutil/trace.go:171","msg":"trace[1040829640] transaction","detail":"{read_only:false; response_revision:1479; number_of_response:1; }","duration":"173.817261ms","start":"2024-07-09T18:40:19.496184Z","end":"2024-07-09T18:40:19.670001Z","steps":["trace[1040829640] 'process raft request'  (duration: 173.61226ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T18:40:21.061754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.020023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-849000-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-07-09T18:40:21.061828Z","caller":"traceutil/trace.go:171","msg":"trace[42653553] range","detail":"{range_begin:/registry/minions/multinode-849000-m03; range_end:; response_count:1; response_revision:1481; }","duration":"193.165323ms","start":"2024-07-09T18:40:20.868649Z","end":"2024-07-09T18:40:21.061814Z","steps":["trace[42653553] 'range keys from in-memory index tree'  (duration: 192.928723ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:41:30 up 23 min,  0 users,  load average: 0.50, 0.46, 0.37
	Linux multinode-849000 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f3de6fb5f7f7] <==
	I0709 18:40:27.669660       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:40:37.685364       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:40:37.685408       1 main.go:227] handling current node
	I0709 18:40:37.685421       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:40:37.685428       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:40:47.692917       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:40:47.693114       1 main.go:227] handling current node
	I0709 18:40:47.693130       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:40:47.693137       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:40:57.705847       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:40:57.705969       1 main.go:227] handling current node
	I0709 18:40:57.705985       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:40:57.705992       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:41:07.713029       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:41:07.713141       1 main.go:227] handling current node
	I0709 18:41:07.713319       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:41:07.713358       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:41:17.724228       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:41:17.724604       1 main.go:227] handling current node
	I0709 18:41:17.724722       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:41:17.724804       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:41:27.737673       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:41:27.737779       1 main.go:227] handling current node
	I0709 18:41:27.737793       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:41:27.737801       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [556077ae2825] <==
	I0709 18:19:39.638553       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0709 18:19:39.698240       1 shared_informer.go:320] Caches are synced for configmaps
	I0709 18:19:39.700011       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0709 18:19:39.702635       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0709 18:19:39.714433       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0709 18:19:40.505081       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0709 18:19:40.517142       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0709 18:19:40.517305       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0709 18:19:41.636583       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0709 18:19:41.706223       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0709 18:19:41.808149       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0709 18:19:41.821195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.206.134]
	I0709 18:19:41.822637       1 controller.go:615] quota admission added evaluator for: endpoints
	I0709 18:19:41.843642       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0709 18:19:42.609385       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0709 18:19:42.805564       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0709 18:19:42.871569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0709 18:19:42.907682       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0709 18:19:57.333598       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0709 18:19:57.543081       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0709 18:35:55.870544       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53940: use of closed network connection
	E0709 18:35:56.795209       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53945: use of closed network connection
	E0709 18:35:57.698486       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53950: use of closed network connection
	E0709 18:36:33.178526       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53970: use of closed network connection
	E0709 18:36:43.597768       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53972: use of closed network connection
	
	
	==> kube-controller-manager [a89ee753e775] <==
	I0709 18:19:57.743180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="172.458844ms"
	I0709 18:19:57.765649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.805292ms"
	I0709 18:19:57.815368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.660854ms"
	I0709 18:19:57.815916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.6µs"
	I0709 18:19:58.007755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.828816ms"
	I0709 18:19:58.026709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.106923ms"
	I0709 18:19:58.029403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.1µs"
	I0709 18:20:07.977654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.049991ms"
	I0709 18:20:08.015594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111µs"
	I0709 18:20:09.991729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.353168ms"
	I0709 18:20:10.001112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="868.106µs"
	I0709 18:20:11.554561       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0709 18:24:17.420348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.233775ms"
	I0709 18:24:17.441694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.911551ms"
	I0709 18:24:17.444364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.629006ms"
	I0709 18:24:20.165672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.094324ms"
	I0709 18:24:20.166173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	I0709 18:40:08.595141       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-849000-m03\" does not exist"
	I0709 18:40:08.641712       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-849000-m03" podCIDRs=["10.244.1.0/24"]
	I0709 18:40:11.793433       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-849000-m03"
	I0709 18:40:32.591516       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-849000-m03"
	I0709 18:40:32.616362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="263.401µs"
	I0709 18:40:32.638542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.1µs"
	I0709 18:40:35.404984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.084842ms"
	I0709 18:40:35.405359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.3µs"
	
	
	==> kube-proxy [02ab9d172768] <==
	I0709 18:19:58.913720       1 server_linux.go:69] "Using iptables proxy"
	I0709 18:19:58.935439       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.206.134"]
	I0709 18:19:59.002175       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 18:19:59.002345       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 18:19:59.002422       1 server_linux.go:165] "Using iptables Proxier"
	I0709 18:19:59.006984       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 18:19:59.008394       1 server.go:872] "Version info" version="v1.30.2"
	I0709 18:19:59.008567       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 18:19:59.012208       1 config.go:192] "Starting service config controller"
	I0709 18:19:59.012230       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 18:19:59.012257       1 config.go:101] "Starting endpoint slice config controller"
	I0709 18:19:59.012263       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 18:19:59.014777       1 config.go:319] "Starting node config controller"
	I0709 18:19:59.015900       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 18:19:59.113145       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0709 18:19:59.113150       1 shared_informer.go:320] Caches are synced for service config
	I0709 18:19:59.116402       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8661e349d48a] <==
	W0709 18:19:40.760717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0709 18:19:40.760830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0709 18:19:40.849864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0709 18:19:40.850245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0709 18:19:40.865437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.865496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.872200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0709 18:19:40.872364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0709 18:19:40.917325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.917365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.931008       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.931093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.976206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0709 18:19:40.976434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0709 18:19:41.005485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0709 18:19:41.005666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0709 18:19:41.019785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0709 18:19:41.020146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0709 18:19:41.110495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 18:19:41.110614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0709 18:19:41.120707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0709 18:19:41.122629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0709 18:19:41.253897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0709 18:19:41.254338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0709 18:19:43.553553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 09 18:36:42 multinode-849000 kubelet[2293]: E0709 18:36:42.972406    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:36:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:36:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:36:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:36:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:37:42 multinode-849000 kubelet[2293]: E0709 18:37:42.971180    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:37:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:37:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:37:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:37:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:38:42 multinode-849000 kubelet[2293]: E0709 18:38:42.972834    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:38:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:38:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:38:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:38:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:39:42 multinode-849000 kubelet[2293]: E0709 18:39:42.974504    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:39:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:39:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:39:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:39:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:40:42 multinode-849000 kubelet[2293]: E0709 18:40:42.973444    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:41:22.470714    7820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000: (12.1592879s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-849000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (266.33s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (69.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status --output json --alsologtostderr: exit status 2 (35.868601s)

                                                
                                                
-- stdout --
	[{"Name":"multinode-849000","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-849000-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-849000-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:41:54.131360    6368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0709 11:41:54.141138    6368 out.go:291] Setting OutFile to fd 1032 ...
	I0709 11:41:54.141902    6368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:41:54.141979    6368 out.go:304] Setting ErrFile to fd 1448...
	I0709 11:41:54.142033    6368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:41:54.158802    6368 out.go:298] Setting JSON to true
	I0709 11:41:54.158802    6368 mustload.go:65] Loading cluster: multinode-849000
	I0709 11:41:54.158802    6368 notify.go:220] Checking for updates...
	I0709 11:41:54.159862    6368 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:41:54.159978    6368 status.go:255] checking status of multinode-849000 ...
	I0709 11:41:54.160596    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:41:56.305247    6368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:41:56.305298    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:41:56.305430    6368 status.go:330] multinode-849000 host status = "Running" (err=<nil>)
	I0709 11:41:56.305430    6368 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:41:56.306697    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:41:58.467408    6368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:41:58.467408    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:41:58.468314    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:42:01.040499    6368 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:42:01.040499    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:01.040620    6368 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:42:01.053522    6368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 11:42:01.053522    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:42:03.225111    6368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:42:03.226039    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:03.226039    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:42:05.813415    6368 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:42:05.813415    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:05.813990    6368 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:42:05.916938    6368 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8633262s)
	I0709 11:42:05.928886    6368 ssh_runner.go:195] Run: systemctl --version
	I0709 11:42:05.953684    6368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:42:05.985733    6368 kubeconfig.go:125] found "multinode-849000" server: "https://172.18.206.134:8443"
	I0709 11:42:05.985815    6368 api_server.go:166] Checking apiserver status ...
	I0709 11:42:06.000322    6368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 11:42:06.039106    6368 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2115/cgroup
	W0709 11:42:06.061950    6368 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2115/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0709 11:42:06.078447    6368 ssh_runner.go:195] Run: ls
	I0709 11:42:06.091495    6368 api_server.go:253] Checking apiserver healthz at https://172.18.206.134:8443/healthz ...
	I0709 11:42:06.098107    6368 api_server.go:279] https://172.18.206.134:8443/healthz returned 200:
	ok
	I0709 11:42:06.098107    6368 status.go:422] multinode-849000 apiserver status = Running (err=<nil>)
	I0709 11:42:06.098107    6368 status.go:257] multinode-849000 status: &{Name:multinode-849000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 11:42:06.098995    6368 status.go:255] checking status of multinode-849000-m02 ...
	I0709 11:42:06.100104    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:42:08.263861    6368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:42:08.264786    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:08.264786    6368 status.go:330] multinode-849000-m02 host status = "Running" (err=<nil>)
	I0709 11:42:08.264862    6368 host.go:66] Checking if "multinode-849000-m02" exists ...
	I0709 11:42:08.265645    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:42:10.458356    6368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:42:10.458776    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:10.458958    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:42:13.071135    6368 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:42:13.071759    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:13.071759    6368 host.go:66] Checking if "multinode-849000-m02" exists ...
	I0709 11:42:13.084482    6368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 11:42:13.084482    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:42:15.239749    6368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:42:15.239749    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:15.240208    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:42:17.803021    6368 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:42:17.803021    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:17.803652    6368 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:42:17.910627    6368 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8261282s)
	I0709 11:42:17.922207    6368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:42:17.946940    6368 status.go:257] multinode-849000-m02 status: &{Name:multinode-849000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0709 11:42:17.946940    6368 status.go:255] checking status of multinode-849000-m03 ...
	I0709 11:42:17.947769    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:42:20.114974    6368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:42:20.115939    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:20.115939    6368 status.go:330] multinode-849000-m03 host status = "Running" (err=<nil>)
	I0709 11:42:20.116019    6368 host.go:66] Checking if "multinode-849000-m03" exists ...
	I0709 11:42:20.116867    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:42:22.333514    6368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:42:22.334320    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:22.334320    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:42:24.923995    6368 main.go:141] libmachine: [stdout =====>] : 172.18.196.236
	
	I0709 11:42:24.924323    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:24.924492    6368 host.go:66] Checking if "multinode-849000-m03" exists ...
	I0709 11:42:24.937203    6368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 11:42:24.937203    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:42:27.139770    6368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:42:27.139770    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:27.139770    6368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:42:29.715271    6368 main.go:141] libmachine: [stdout =====>] : 172.18.196.236
	
	I0709 11:42:29.715838    6368 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:42:29.715995    6368 sshutil.go:53] new ssh client: &{IP:172.18.196.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m03\id_rsa Username:docker}
	I0709 11:42:29.822282    6368 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8850617s)
	I0709 11:42:29.834493    6368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:42:29.859976    6368 status.go:257] multinode-849000-m03 status: &{Name:multinode-849000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:186: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-849000 status --output json --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000: (12.0849486s)
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25: (8.3819023s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p multinode-849000                               | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:16 PDT |                     |
	|         | --wait=true --memory=2200                         |                  |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- apply -f                   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:24 PDT | 09 Jul 24 11:24 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- rollout                    | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:24 PDT |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT | 09 Jul 24 11:36 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT |                     |
	|         | busybox-fc5497c4f-4hjks                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT | 09 Jul 24 11:36 PDT |
	|         | busybox-fc5497c4f-f2j8m                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT |                     |
	|         | busybox-fc5497c4f-f2j8m -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.192.1                         |                  |                   |         |                     |                     |
	| node    | add -p multinode-849000 -v 3                      | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:37 PDT | 09 Jul 24 11:40 PDT |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 11:16:35
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 11:16:35.706571   11080 out.go:291] Setting OutFile to fd 1856 ...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.707294   11080 out.go:304] Setting ErrFile to fd 1916...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.730175   11080 out.go:298] Setting JSON to false
	I0709 11:16:35.734088   11080 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7264,"bootTime":1720541731,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 11:16:35.734088   11080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 11:16:35.740900   11080 out.go:177] * [multinode-849000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 11:16:35.746952   11080 notify.go:220] Checking for updates...
	I0709 11:16:35.749517   11080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:16:35.752016   11080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 11:16:35.754074   11080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 11:16:35.757149   11080 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 11:16:35.759785   11080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 11:16:35.763232   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:16:35.763232   11080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 11:16:41.108594   11080 out.go:177] * Using the hyperv driver based on user configuration
	I0709 11:16:41.113436   11080 start.go:297] selected driver: hyperv
	I0709 11:16:41.113436   11080 start.go:901] validating driver "hyperv" against <nil>
	I0709 11:16:41.113436   11080 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 11:16:41.161717   11080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 11:16:41.163562   11080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:16:41.163562   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:16:41.163562   11080 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0709 11:16:41.163562   11080 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0709 11:16:41.163562   11080 start.go:340] cluster config:
	{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:16:41.164325   11080 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 11:16:41.169436   11080 out.go:177] * Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	I0709 11:16:41.171790   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:16:41.171790   11080 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 11:16:41.171790   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:16:41.172900   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:16:41.173204   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:16:41.173497   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:16:41.173834   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json: {Name:mkcd76fd0991636c9ebb3945d5f6230c136234ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:360] acquireMachinesLock for multinode-849000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-849000"
	I0709 11:16:41.175145   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:16:41.175717   11080 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 11:16:41.178833   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:16:41.179697   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:16:41.179858   11080 client.go:168] LocalClient.Create starting
	I0709 11:16:41.180393   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181037   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:16:41.181305   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.181363   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181499   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:43.203345   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:16:44.905448   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:49.977487   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:49.978001   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:49.980413   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:16:50.481409   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: Creating VM...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:53.557877   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:16:53.557877   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:55.342337   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:55.343188   11080 main.go:141] libmachine: Creating VHD
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:16:59.073202   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 250EFD27-3D80-4D94-9BBB-C36AC3EE4AF2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:16:59.073277   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:16:59.081799   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:02.356056   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -SizeBytes 20000MB
	I0709 11:17:04.920871   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:04.921598   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:04.921696   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stdout =====>] : 
Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-849000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000 -DynamicMemoryEnabled $false
	I0709 11:17:10.906954   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000 -Count 2
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:13.117046   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\boot2docker.iso'
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:15.734748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd'
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:18.434648   11080 main.go:141] libmachine: Starting VM...
	I0709 11:17:18.434648   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000
	I0709 11:17:21.548427   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:23.856308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:23.857327   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:23.857477   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:26.424823   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:26.425555   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:27.429457   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:29.669589   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:33.238604   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:35.539152   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:39.150748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:41.412758   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:43.945561   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:43.946556   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:44.948904   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:47.223493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:49.888321   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:52.029346   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:17:52.029346   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:54.184452   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:56.739762   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:56.740551   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:56.747332   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:17:56.757962   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:17:56.757962   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:17:56.888454   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:17:56.888454   11080 buildroot.go:166] provisioning hostname "multinode-849000"
	I0709 11:17:56.888632   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:58.996092   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:01.596255   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:01.596966   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:01.596966   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000 && echo "multinode-849000" | sudo tee /etc/hostname
	I0709 11:18:01.744135   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000
	
	I0709 11:18:01.744309   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:03.902843   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:06.504362   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:06.505105   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:06.511047   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:06.511730   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:06.511730   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:18:06.661183   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:18:06.661276   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:18:06.661276   11080 buildroot.go:174] setting up certificates
	I0709 11:18:06.661276   11080 provision.go:84] configureAuth start
	I0709 11:18:06.661404   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:08.870371   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:08.871487   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:08.871619   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:11.480657   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:13.679886   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:13.680032   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:13.680386   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:16.351593   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:16.351812   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:16.351812   11080 provision.go:143] copyHostCerts
	I0709 11:18:16.351812   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:18:16.351812   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:18:16.352341   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:18:16.352562   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:18:16.353746   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:18:16.353870   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:18:16.353870   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:18:16.354397   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:18:16.355454   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:18:16.355782   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:18:16.355782   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:18:16.356143   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:18:16.357550   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000 san=[127.0.0.1 172.18.206.134 localhost minikube multinode-849000]
	I0709 11:18:16.528750   11080 provision.go:177] copyRemoteCerts
	I0709 11:18:16.542866   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:18:16.543526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:18.745596   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:18.746390   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:18.746524   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:21.394478   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:21.394661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:21.394962   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:21.507114   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9635719s)
	I0709 11:18:21.507261   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:18:21.507746   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:18:21.555636   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:18:21.556231   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0709 11:18:21.603561   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:18:21.604047   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:18:21.651880   11080 provision.go:87] duration metric: took 14.9904677s to configureAuth
	I0709 11:18:21.651880   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:18:21.652889   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:18:21.652889   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:23.890387   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:26.564345   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:26.565125   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:26.565125   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:18:26.688579   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:18:26.688579   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:18:26.688751   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:18:26.688751   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:28.871918   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:31.502951   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:31.503345   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:31.503345   11080 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:18:31.658280   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:18:31.658412   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:33.800464   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:36.418307   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:36.418361   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:36.423718   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:36.423718   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:36.424298   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:18:38.623401   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:18:38.623401   11080 machine.go:97] duration metric: took 46.5939015s to provisionDockerMachine
	I0709 11:18:38.624385   11080 client.go:171] duration metric: took 1m57.4441387s to LocalClient.Create
	I0709 11:18:38.624385   11080 start.go:167] duration metric: took 1m57.4442999s to libmachine.API.Create "multinode-849000"
	I0709 11:18:38.624385   11080 start.go:293] postStartSetup for "multinode-849000" (driver="hyperv")
	I0709 11:18:38.624385   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:18:38.635377   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:18:38.635377   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:40.803077   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:40.803227   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:40.803332   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:43.382675   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:43.483674   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8482809s)
	I0709 11:18:43.496129   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:18:43.504466   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:18:43.504466   11080 command_runner.go:130] > ID=buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:18:43.504466   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:18:43.504466   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:18:43.504466   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:18:43.505074   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:18:43.506014   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:18:43.506014   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:18:43.518207   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:18:43.536167   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:18:43.580014   11080 start.go:296] duration metric: took 4.955526s for postStartSetup
	I0709 11:18:43.583840   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:45.720485   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:48.244917   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:18:48.247885   11080 start.go:128] duration metric: took 2m7.0717492s to createHost
	I0709 11:18:48.247974   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:50.357356   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:52.893710   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:52.893837   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:52.893837   11080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0709 11:18:53.018311   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549133.027082640
	
	I0709 11:18:53.018311   11080 fix.go:216] guest clock: 1720549133.027082640
	I0709 11:18:53.018311   11080 fix.go:229] Guest: 2024-07-09 11:18:53.02708264 -0700 PDT Remote: 2024-07-09 11:18:48.2478857 -0700 PDT m=+132.622337601 (delta=4.77919694s)
	I0709 11:18:53.018461   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:55.134647   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:57.706817   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:57.707574   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:57.707574   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549133
	I0709 11:18:57.837990   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:18:53 UTC 2024
	
	I0709 11:18:57.837990   11080 fix.go:236] clock set: Tue Jul  9 18:18:53 UTC 2024
	 (err=<nil>)
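The clock fix above is three shell steps: read the guest clock as `date +%s.%N`, diff it against the host reading, and reset the guest with `sudo date -s @<epoch>`. A minimal runnable sketch of the arithmetic (both timestamps here come from the local machine, and the `date -s` step is only echoed, since running it needs root and would actually change the system time):

```shell
# Read the clock the same way minikube does over SSH: seconds.nanoseconds.
guest_epoch=$(date +%s.%N)
host_epoch=$(date +%s.%N)   # stand-in for the host-side reading

# The value minikube logs as "(delta=...)" is a plain float difference.
delta=$(awk -v g="$guest_epoch" -v h="$host_epoch" 'BEGIN { printf "%.9f", h - g }')
printf '%s' "$delta" > /tmp/minikube_clock_delta
echo "guest=$guest_epoch host=$host_epoch delta=${delta}s"

# The real fix runs as root on the guest; echoed here as a dry run.
echo sudo date -s "@${host_epoch%.*}"
```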
	I0709 11:18:57.837990   11080 start.go:83] releasing machines lock for "multinode-849000", held for 2m16.662394s
	I0709 11:18:57.837990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:59.937542   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:02.440702   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:19:02.440914   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:02.450148   11080 ssh_runner.go:195] Run: cat /version.json
	I0709 11:19:02.451159   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.652788   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:07.368844   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.369236   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.369437   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.395266   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.516234   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:19:07.516234   11080 command_runner.go:130] > {"iso_version": "v1.33.1-1720433170-19199", "kicbase_version": "v0.0.44-1720012048-19186", "minikube_version": "v1.33.1", "commit": "41ed6339bbe6a947e5e92015e7dd216db14d0b72"}
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: cat /version.json: (5.0661785s)
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0755151s)
	I0709 11:19:07.529057   11080 ssh_runner.go:195] Run: systemctl --version
	I0709 11:19:07.538439   11080 command_runner.go:130] > systemd 252 (252)
	I0709 11:19:07.538533   11080 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0709 11:19:07.550293   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:19:07.559188   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0709 11:19:07.559555   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:19:07.570397   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:19:07.596860   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:19:07.598042   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:19:07.598090   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:07.598448   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:07.631211   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:19:07.642798   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:19:07.672487   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:19:07.691044   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:19:07.702345   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:19:07.737161   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.766120   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:19:07.798415   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.831110   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:19:07.865314   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:19:07.899412   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:19:07.929191   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
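The run of `sed` edits above rewrites `/etc/containerd/config.toml` in place. The two key substitutions (pinning the pause image and forcing the cgroupfs driver) can be replayed against a throwaway copy; the sample TOML below is a hypothetical minimal fragment, not the VM's real config:

```shell
# Recreate the containerd edits from the log against a scratch config.toml.
d=/tmp/containerd-sketch; mkdir -p "$d"
cat > "$d/config.toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# Pin the pause image, as in the log.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$d/config.toml"
# Switch containerd to the cgroupfs driver by disabling SystemdCgroup.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$d/config.toml"

grep -E 'sandbox_image|SystemdCgroup' "$d/config.toml"
```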
	I0709 11:19:07.959649   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:19:07.977886   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:19:07.990402   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:19:08.021057   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:08.212039   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 11:19:08.247477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:08.260899   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Unit]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:19:08.287773   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:19:08.287773   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:19:08.287773   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Service]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Type=notify
	I0709 11:19:08.287773   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:19:08.287773   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:19:08.287773   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:19:08.287773   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:19:08.287773   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:19:08.287773   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:19:08.287773   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:19:08.287773   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:19:08.288322   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:19:08.288322   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:19:08.288322   11080 command_runner.go:130] > ExecStart=
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:19:08.288380   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:19:08.288380   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:19:08.288532   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:19:08.288603   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:19:08.288603   11080 command_runner.go:130] > Delegate=yes
	I0709 11:19:08.288603   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:19:08.288644   11080 command_runner.go:130] > KillMode=process
	I0709 11:19:08.288644   11080 command_runner.go:130] > [Install]
	I0709 11:19:08.288644   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:19:08.299913   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.334941   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:19:08.378216   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.411780   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.445847   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:19:08.504747   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.527698   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:08.557879   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:19:08.569949   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:19:08.575730   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:19:08.587321   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:19:08.604542   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:19:08.652744   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:19:08.860138   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:19:09.036606   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:19:09.036846   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:19:09.086669   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:09.274594   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:11.819580   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5449771s)
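The 130-byte `/etc/docker/daemon.json` that `docker.go:574` writes is not shown in the log; a plausible equivalent selecting the same cgroupfs driver (the contents are an assumption, and it is written to `/tmp` here rather than `/etc/docker`) would be:

```shell
# Hypothetical daemon.json matching the "cgroupfs" driver the log configures.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF

# Sanity-check the JSON before a daemon-reload/restart would pick it up.
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json parses"
```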
	I0709 11:19:11.830623   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 11:19:11.865432   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:11.899527   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 11:19:12.080125   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 11:19:12.263695   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.465673   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 11:19:12.506610   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:12.540854   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.740781   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 11:19:12.845180   11080 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 11:19:12.856179   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0709 11:19:12.864333   11080 command_runner.go:130] > Device: 0,22	Inode: 881         Links: 1
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864333   11080 command_runner.go:130] > Modify: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] > Change: 2024-07-09 18:19:12.777376059 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:12.865396   11080 start.go:562] Will wait 60s for crictl version
	I0709 11:19:12.878013   11080 ssh_runner.go:195] Run: which crictl
	I0709 11:19:12.883453   11080 command_runner.go:130] > /usr/bin/crictl
	I0709 11:19:12.896196   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 11:19:12.945750   11080 command_runner.go:130] > Version:  0.1.0
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeName:  docker
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeApiVersion:  v1
	I0709 11:19:12.946914   11080 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 11:19:12.955749   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:12.986144   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:12.997084   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:13.033222   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:13.039328   11080 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 11:19:13.039536   11080 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: 172.18.192.1/20
	I0709 11:19:13.058315   11080 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 11:19:13.064313   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
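The `/etc/hosts` one-liner above is a filter-and-append idiom: strip any stale `host.minikube.internal` entry, append the current gateway IP, and copy the result back over the original. The same shape, run against a scratch hosts file instead of `/etc/hosts`:

```shell
# Seed a scratch hosts file with an existing (stale) minikube entry.
h=/tmp/hosts-sketch
printf '127.0.0.1\tlocalhost\n172.18.200.9\thost.minikube.internal\n' > "$h"

# Same shape as the log's command: drop the old entry, append the fresh one.
{ grep -v $'\thost.minikube.internal$' "$h"; printf '172.18.192.1\thost.minikube.internal\n'; } > "$h.$$"
mv "$h.$$" "$h"

grep 'host.minikube.internal' "$h"
```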
	I0709 11:19:13.085011   11080 kubeadm.go:877] updating cluster {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 11:19:13.085193   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:19:13.094647   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:13.119600   11080 docker.go:685] Got preloaded images: 
	I0709 11:19:13.119753   11080 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 11:19:13.132471   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:13.150071   11080 command_runner.go:139] > {"Repositories":{}}
	I0709 11:19:13.160388   11080 ssh_runner.go:195] Run: which lz4
	I0709 11:19:13.168652   11080 command_runner.go:130] > /usr/bin/lz4
	I0709 11:19:13.168652   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0709 11:19:13.180500   11080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0709 11:19:13.186301   11080 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0709 11:19:14.857940   11080 docker.go:649] duration metric: took 1.6892825s to copy over tarball
	I0709 11:19:14.870175   11080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 11:19:23.389025   11080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5188212s)
	I0709 11:19:23.389025   11080 ssh_runner.go:146] rm: /preloaded.tar.lz4
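The preload path above scp's a ~360 MB lz4-compressed tarball and unpacks it into `/var` with `tar --xattrs --xattrs-include security.capability -I lz4`, then deletes the tarball. A miniature of the same round trip (gzip stands in for lz4 so the sketch runs without the `lz4` binary, the xattr flags are omitted for portability, and all paths are throwaway):

```shell
# Build a tiny stand-in for the preload tarball.
src=/tmp/preload-src; dst=/tmp/preload-dst
rm -rf "$src" "$dst"; mkdir -p "$src/lib/docker/overlay2" "$dst"
echo layer-data > "$src/lib/docker/overlay2/layer"
tar -C "$src" -czf /tmp/preloaded.tar.gz .

# Extract into the target root, as minikube does into /var.
tar -C "$dst" -xzf /tmp/preloaded.tar.gz
ls "$dst/lib/docker/overlay2"

rm /tmp/preloaded.tar.gz   # minikube likewise removes the tarball afterwards
```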
	I0709 11:19:23.458573   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:23.485866   11080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0709 11:19:23.486188   11080 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 11:19:23.533118   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:23.744757   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:27.380382   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6356119s)
	I0709 11:19:27.389977   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0709 11:19:27.415657   11080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:27.415657   11080 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 11:19:27.415657   11080 cache_images.go:84] Images are preloaded, skipping loading
	I0709 11:19:27.415657   11080 kubeadm.go:928] updating node { 172.18.206.134 8443 v1.30.2 docker true true} ...
	I0709 11:19:27.415657   11080 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-849000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.206.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 11:19:27.423616   11080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 11:19:27.458657   11080 command_runner.go:130] > cgroupfs
	I0709 11:19:27.459385   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:27.459385   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:27.459452   11080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 11:19:27.459452   11080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.206.134 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-849000 NodeName:multinode-849000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.206.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.206.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 11:19:27.459589   11080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.206.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-849000"
	  kubeletExtraArgs:
	    node-ip: 172.18.206.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.206.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0709 11:19:27.472965   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubeadm
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubectl
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubelet
	I0709 11:19:27.499841   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 11:19:27.511476   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 11:19:27.527506   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0709 11:19:27.555887   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 11:19:27.582917   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0709 11:19:27.625088   11080 ssh_runner.go:195] Run: grep 172.18.206.134	control-plane.minikube.internal$ /etc/hosts
	I0709 11:19:27.629979   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.206.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:27.662105   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:27.863890   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:27.891871   11080 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000 for IP: 172.18.206.134
	I0709 11:19:27.891871   11080 certs.go:194] generating shared ca certs ...
	I0709 11:19:27.891974   11080 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 11:19:27.893231   11080 certs.go:256] generating profile certs ...
	I0709 11:19:27.894104   11080 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key
	I0709 11:19:27.894284   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt with IP's: []
	I0709 11:19:28.075685   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt ...
	I0709 11:19:28.075685   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt: {Name:mk25257931a758267f442465386bb9bdebfd15e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.077683   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key ...
	I0709 11:19:28.077683   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key: {Name:mk28ea0dfb093b7e1eceacf2d9e8a6ee777dbd4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.078679   11080 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab
	I0709 11:19:28.078679   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.206.134]
	I0709 11:19:28.282674   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab ...
	I0709 11:19:28.282674   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab: {Name:mk6d3927cc1582195a75050ba0c963c9f3cc6b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.284187   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab ...
	I0709 11:19:28.284187   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab: {Name:mk7c2c31b56e9fbc5ac0d0a2d8ec4a706b474e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.285485   11080 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt
	I0709 11:19:28.296251   11080 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key
	I0709 11:19:28.297243   11080 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key
	I0709 11:19:28.297243   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt with IP's: []
	I0709 11:19:28.588714   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt ...
	I0709 11:19:28.588714   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt: {Name:mk558fea8586bf42355b37f550a2aab396534e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590476   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key ...
	I0709 11:19:28.590476   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key: {Name:mk91292cc98d71191163856df723afdf525149d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 11:19:28.591953   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 11:19:28.592200   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 11:19:28.592414   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 11:19:28.592581   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 11:19:28.592751   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 11:19:28.601940   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 11:19:28.602968   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 11:19:28.602968   11080 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 11:19:28.603997   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 11:19:28.604332   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 11:19:28.604696   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 11:19:28.605757   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 11:19:28.606105   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 11:19:28.606281   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:28.607895   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 11:19:28.657063   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 11:19:28.708475   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 11:19:28.753169   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 11:19:28.799111   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0709 11:19:28.843096   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 11:19:28.892474   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 11:19:28.936778   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 11:19:28.983720   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 11:19:29.032197   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 11:19:29.078840   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 11:19:29.121438   11080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 11:19:29.166376   11080 ssh_runner.go:195] Run: openssl version
	I0709 11:19:29.174606   11080 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0709 11:19:29.186263   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 11:19:29.214563   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221452   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221529   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.233587   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.241034   11080 command_runner.go:130] > 51391683
	I0709 11:19:29.253531   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 11:19:29.287599   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 11:19:29.319642   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.340563   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.351251   11080 command_runner.go:130] > 3ec20f2e
	I0709 11:19:29.363289   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 11:19:29.394996   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 11:19:29.430863   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439488   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439598   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.451335   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.461060   11080 command_runner.go:130] > b5213941
	I0709 11:19:29.472325   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 11:19:29.502349   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 11:19:29.508349   11080 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.508349   11080 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.509336   11080 kubeadm.go:391] StartCluster: {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:19:29.517326   11080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 11:19:29.552571   11080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0709 11:19:29.583129   11080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 11:19:29.614110   11080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0709 11:19:29.630668   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631001   11080 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631083   11080 kubeadm.go:156] found existing configuration files:
	
	I0709 11:19:29.643858   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 11:19:29.660913   11080 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.660913   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.672874   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 11:19:29.701166   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 11:19:29.719398   11080 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.719398   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.732866   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 11:19:29.764341   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.780362   11080 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.781070   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.793378   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.822887   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 11:19:29.839358   11080 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.839848   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.851450   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0709 11:19:29.868927   11080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 11:19:30.273184   11080 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:30.273184   11080 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:43.382099   11080 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [preflight] Running pre-flight checks
	I0709 11:19:43.382302   11080 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 11:19:43.382490   11080 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382562   11080 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.382843   11080 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.385956   11080 out.go:204]   - Generating certificates and keys ...
	I0709 11:19:43.386701   11080 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0709 11:19:43.386720   11080 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 11:19:43.386939   11080 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386963   11080 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.387517   11080 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387517   11080 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387702   11080 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387746   11080 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387967   11080 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.387967   11080 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.388299   11080 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388370   11080 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388585   11080 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388585   11080 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.392839   11080 out.go:204]   - Booting up control plane ...
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.395906   11080 kubeadm.go:309] [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.396929   11080 kubeadm.go:309] [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 command_runner.go:130] > [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 kubeadm.go:309] [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.396929   11080 command_runner.go:130] > [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.399982   11080 out.go:204]   - Configuring RBAC rules ...
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.401848   11080 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.401848   11080 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.405851   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:43.405851   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:43.408882   11080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0709 11:19:43.427890   11080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0709 11:19:43.436838   11080 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: 2024-07-09 18:17:47.269542400 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Modify: 2024-07-08 15:41:40.000000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Change: 2024-07-09 11:17:38.873000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:43.437660   11080 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0709 11:19:43.437724   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0709 11:19:43.486974   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0709 11:19:44.013734   11080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.028712   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.056718   11080 command_runner.go:130] > serviceaccount/kindnet created
	I0709 11:19:44.082804   11080 command_runner.go:130] > daemonset.apps/kindnet created
	I0709 11:19:44.086715   11080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-849000 minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=multinode-849000 minikube.k8s.io/primary=true
	I0709 11:19:44.115923   11080 command_runner.go:130] > -16
	I0709 11:19:44.121702   11080 ops.go:34] apiserver oom_adj: -16
	I0709 11:19:44.326882   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0709 11:19:44.332192   11080 command_runner.go:130] > node/multinode-849000 labeled
	I0709 11:19:44.342094   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.456107   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:44.849260   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.954493   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.356403   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.456462   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.855390   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.956473   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.355707   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.465842   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.857102   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.969191   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.359571   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.471625   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.845990   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.968255   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.348435   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.444253   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.849560   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.962518   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.355988   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.464938   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.857549   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.960971   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.358892   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.517544   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.859431   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.965459   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.346160   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.448688   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.850874   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.960813   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.349922   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.460568   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.858017   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.978603   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.347266   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.460858   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.852199   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.970042   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.358007   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.467115   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.847966   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.971538   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.352008   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.457997   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.855006   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.967023   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.356509   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.497561   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.848447   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.958599   11080 command_runner.go:130] > NAME      SECRETS   AGE
	I0709 11:19:56.958599   11080 command_runner.go:130] > default   0         0s
	I0709 11:19:56.958599   11080 kubeadm.go:1107] duration metric: took 12.8717652s to wait for elevateKubeSystemPrivileges
	W0709 11:19:56.958599   11080 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 11:19:56.958599   11080 kubeadm.go:393] duration metric: took 27.4491691s to StartCluster
	I0709 11:19:56.958599   11080 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.958599   11080 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:56.961504   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.963374   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 11:19:56.963460   11080 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:19:56.963460   11080 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0709 11:19:56.963779   11080 addons.go:69] Setting default-storageclass=true in profile "multinode-849000"
	I0709 11:19:56.963724   11080 addons.go:69] Setting storage-provisioner=true in profile "multinode-849000"
	I0709 11:19:56.963837   11080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-849000"
	I0709 11:19:56.963837   11080 addons.go:234] Setting addon storage-provisioner=true in "multinode-849000"
	I0709 11:19:56.963837   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:56.963837   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:19:56.964647   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.965248   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.970232   11080 out.go:177] * Verifying Kubernetes components...
	I0709 11:19:56.985249   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:57.211673   11080 command_runner.go:130] > apiVersion: v1
	I0709 11:19:57.211752   11080 command_runner.go:130] > data:
	I0709 11:19:57.211752   11080 command_runner.go:130] >   Corefile: |
	I0709 11:19:57.211752   11080 command_runner.go:130] >     .:53 {
	I0709 11:19:57.211752   11080 command_runner.go:130] >         errors
	I0709 11:19:57.211752   11080 command_runner.go:130] >         health {
	I0709 11:19:57.211752   11080 command_runner.go:130] >            lameduck 5s
	I0709 11:19:57.211752   11080 command_runner.go:130] >         }
	I0709 11:19:57.211752   11080 command_runner.go:130] >         ready
	I0709 11:19:57.211825   11080 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0709 11:19:57.211825   11080 command_runner.go:130] >            pods insecure
	I0709 11:19:57.211825   11080 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0709 11:19:57.211825   11080 command_runner.go:130] >            ttl 30
	I0709 11:19:57.211825   11080 command_runner.go:130] >         }
	I0709 11:19:57.211825   11080 command_runner.go:130] >         prometheus :9153
	I0709 11:19:57.211825   11080 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0709 11:19:57.211914   11080 command_runner.go:130] >            max_concurrent 1000
	I0709 11:19:57.211914   11080 command_runner.go:130] >         }
	I0709 11:19:57.211914   11080 command_runner.go:130] >         cache 30
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loop
	I0709 11:19:57.211914   11080 command_runner.go:130] >         reload
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loadbalance
	I0709 11:19:57.212061   11080 command_runner.go:130] >     }
	I0709 11:19:57.212061   11080 command_runner.go:130] > kind: ConfigMap
	I0709 11:19:57.212061   11080 command_runner.go:130] > metadata:
	I0709 11:19:57.212127   11080 command_runner.go:130] >   creationTimestamp: "2024-07-09T18:19:42Z"
	I0709 11:19:57.212127   11080 command_runner.go:130] >   name: coredns
	I0709 11:19:57.212127   11080 command_runner.go:130] >   namespace: kube-system
	I0709 11:19:57.212127   11080 command_runner.go:130] >   resourceVersion: "259"
	I0709 11:19:57.212301   11080 command_runner.go:130] >   uid: 7f6d77d9-aa71-4460-bf8f-36c58243a4c9
	I0709 11:19:57.212540   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 11:19:57.402732   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:57.866428   11080 command_runner.go:130] > configmap/coredns replaced
	I0709 11:19:57.866428   11080 start.go:946] {"host.minikube.internal": 172.18.192.1} host record injected into CoreDNS's ConfigMap
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.869413   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.870414   11080 cert_rotation.go:137] Starting client certificate rotation controller
	I0709 11:19:57.870414   11080 node_ready.go:35] waiting up to 6m0s for node "multinode-849000" to be "Ready" ...
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.885872   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.885872   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Audit-Id: 6bb3d639-9069-4a29-8363-06f8a9831c96
	I0709 11:19:57.886681   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.886681   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:57.887054   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Audit-Id: f8472087-a57e-416c-8eb7-93f828e86e4a
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.887125   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.887908   11080 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.888641   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.888641   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:19:57.888641   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.922291   11080 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0709 11:19:57.922618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Audit-Id: 71677033-c49e-4d37-8393-48341086209c
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.922733   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"391","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.384286   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:19:58.384390   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384390   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 0be5af66-01cb-451f-b03f-f7b17cb342f0
	I0709 11:19:58.384457   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 73b21b85-deb0-469b-929c-809b7004c7a7
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"401","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:58.384457   11080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-849000" context rescaled to 1 replicas
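(Annotation, not part of the log: the GET/PUT pair above uses the deployment's `scale` subresource — minikube reads the current `Scale` object, sets `spec.replicas` to 1, and PUTs it back; `status.replicas` stays at 2 until the deployment controller reconciles, which is what the follow-up GET at `resourceVersion: "401"` confirms. A hand-run equivalent against a live cluster would look like the sketch below; the context name is taken from the log and is illustrative.)

```shell
# Equivalent of the rescale in the log, done by hand (requires a running cluster).
kubectl --context multinode-849000 -n kube-system scale deployment coredns --replicas=1

# Observe spec vs. status converging, mirroring the two Scale response bodies above:
kubectl --context multinode-849000 -n kube-system get deployment coredns \
  -o jsonpath='spec={.spec.replicas} status={.status.replicas}{"\n"}'
```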
	I0709 11:19:58.870813   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.871025   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.871025   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.871025   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.873618   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:19:58.873618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Audit-Id: ad90069a-940e-4cdb-af81-263d232584a4
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.874322   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.874523   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.317106   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:59.317937   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:59.319000   11080 addons.go:234] Setting addon default-storageclass=true in "multinode-849000"
	I0709 11:19:59.319148   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:59.320086   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.326790   11080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:59.329802   11080 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:19:59.329802   11080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 11:19:59.329802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.380372   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.380372   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.380485   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.380485   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.383785   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:19:59.384697   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Audit-Id: 2d911086-1ff9-4073-8947-dda5637edc43
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.385157   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.876671   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.876962   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.876962   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.876962   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.882163   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:59.882430   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Audit-Id: ad80d923-4aa0-4499-baf3-ad4ec184183d
	I0709 11:19:59.882575   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.883719   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.884541   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:00.380571   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.380571   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.380571   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.380571   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.383966   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:00.384064   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Audit-Id: 4a57b8ec-36c2-4d90-9953-8040b268ad72
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.384193   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.384193   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.384227   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.384339   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:00.874487   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.874487   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.874577   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.874577   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.878085   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:00.878446   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Audit-Id: 7a79b48d-490c-45b9-8151-9d41d845548a
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.878824   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.384736   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.384736   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.384736   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.384736   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.389692   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:01.389768   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.389768   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.389768   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.389862   11080 round_trippers.go:580]     Audit-Id: 1717079c-a1a4-4056-ab5c-ebb223423669
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.389950   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.391360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.648493   11080 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:01.648493   11080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:20:01.693665   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.693737   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.693813   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:01.876763   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.876763   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.876763   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.876763   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.879377   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:01.879377   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Audit-Id: 0ed34bf6-0054-408f-9605-05f03b8f80e6
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.880494   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.384156   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.384242   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.384242   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.384242   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.387596   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:02.388425   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.388519   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.388569   11080 round_trippers.go:580]     Audit-Id: 259b4cd6-103a-46f6-84e4-4843fc15af0a
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.389015   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.389720   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:02.877416   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.877512   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.877583   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.877583   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.880264   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:02.880264   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Audit-Id: 5562798d-5a0c-40f4-971f-b148e1abc842
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.881513   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.385289   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.385402   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.385505   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.385568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.388996   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.389181   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Audit-Id: 4ecfd387-5cb9-439c-becc-8c20cdb41af7
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.389360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.879716   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.879972   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.879972   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.879972   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.883598   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.883598   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Audit-Id: ec1efeda-bf31-45f7-a76f-11d053440253
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.884488   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.951175   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:03.951212   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:03.951320   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:04.384770   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.384770   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.384770   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.384770   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.390877   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:04.390877   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Audit-Id: 2dfefc86-a830-4942-9bba-6769c2bc2c15
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.391263   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:04.391723   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:04.417029   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:04.417846   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:04.417999   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:04.559903   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:20:04.876248   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.876326   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.876326   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.876326   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.879898   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:04.879898   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Audit-Id: 1a6b0670-7193-473e-b8b3-1e5ed801e6c2
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.880302   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.131215   11080 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0709 11:20:05.131215   11080 command_runner.go:130] > pod/storage-provisioner created
	I0709 11:20:05.382732   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.382846   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.382846   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.382940   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.385465   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:05.385465   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Audit-Id: a9b472dd-22b2-460d-9517-6e634e4a101a
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.386469   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.875363   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.875363   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.875363   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.875363   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.879073   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:05.879530   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Audit-Id: 27ad306f-2225-40f7-8dc1-fa87ab3246f1
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.879530   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.879646   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.879646   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.880110   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.381697   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.381697   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.381697   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.381697   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.385207   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.385655   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Audit-Id: 696fd9a0-d92d-43a9-8bb1-bfc5d15a688d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.385720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:06.619934   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:06.761070   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:06.873491   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.873559   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.873559   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.873615   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.876478   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.876544   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Audit-Id: efcee314-8dd6-4c48-a1a6-4bf059942d04
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.876612   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.876721   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.877563   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:06.908144   11080 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0709 11:20:06.908847   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses
	I0709 11:20:06.908910   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.908910   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.908910   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.912483   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.912686   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Length: 1273
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Audit-Id: 739ee856-002a-4545-9544-df6be0efec2a
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.912921   11080 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0709 11:20:06.913516   11080 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.913596   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0709 11:20:06.913596   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:20:06.913704   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.916586   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.916586   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Audit-Id: a5ae0cbf-9df0-489a-8da4-2e8f3aa910ad
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Length: 1220
	I0709 11:20:06.917609   11080 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.921571   11080 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0709 11:20:06.923563   11080 addons.go:510] duration metric: took 9.9600694s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0709 11:20:07.375568   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.375568   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.375568   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.375568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.378569   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:07.379620   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Audit-Id: bd77f714-dc63-4d2c-bf78-52162a6b64d7
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.380117   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:07.875799   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.875861   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.875861   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.875861   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.880450   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:07.880704   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Audit-Id: 74d6bf60-f1ad-4db1-861f-6ea7ba47b092
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.881227   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:08.380911   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.381007   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.381007   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.381059   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.384650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.384650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Audit-Id: 46699637-e1f2-4ffe-9a5a-606601b7ce76
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.385170   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.385430   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.385689   11080 node_ready.go:49] node "multinode-849000" has status "Ready":"True"
	I0709 11:20:08.385689   11080 node_ready.go:38] duration metric: took 10.5152391s for node "multinode-849000" to be "Ready" ...
	I0709 11:20:08.385689   11080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:08.385689   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:08.385689   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.385689   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.385689   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.389650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.389650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Audit-Id: c7a373c1-e4d1-49a7-b63d-f1f5fe5cbdfe
	I0709 11:20:08.391677   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0709 11:20:08.396680   11080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:08.396680   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.396680   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.396680   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.397654   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.401662   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:08.401662   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Audit-Id: f0c73321-6fb5-4d40-a2ca-139f50a7329a
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.402451   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.403030   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.403030   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.403030   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.403030   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.409674   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:08.409674   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.409674   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Audit-Id: f9f6bf0c-50a8-416b-b487-7a0381a93ada
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.411023   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.904464   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.904538   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.904538   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.904538   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.924115   11080 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0709 11:20:08.924115   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.924115   11080 round_trippers.go:580]     Audit-Id: 5c7a83f8-f6fb-40c3-af41-44c2d80fb1eb
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.924509   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.925643   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.925643   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.925643   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.925643   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.942620   11080 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0709 11:20:08.943087   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Audit-Id: 1a00f334-2356-4158-b461-0e0c6821e0b6
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.945720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.412235   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.412389   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.412389   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.412389   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.417018   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.417018   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Audit-Id: 1bacafec-faf2-4175-9ce5-e5206b1140e1
	I0709 11:20:09.417950   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:09.418720   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.418777   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.418777   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.418777   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.421159   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.421159   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Audit-Id: 2bf8156c-3153-4e3e-b8c5-b1b8a2e4e26e
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.423016   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.901337   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.901337   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.901337   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.901337   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.953926   11080 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0709 11:20:09.953926   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Audit-Id: 1aada5b5-53a1-4882-b982-815daf34a5c5
	I0709 11:20:09.955836   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0709 11:20:09.956635   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.956732   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.956732   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.956732   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.959094   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.959094   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Audit-Id: ae59e9a3-f8ac-437b-9c75-8931309c73ad
	I0709 11:20:09.960120   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.960120   11080 pod_ready.go:92] pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.960661   11080 pod_ready.go:81] duration metric: took 1.5639759s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-849000
	I0709 11:20:09.960661   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.960828   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.960828   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.969075   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.969075   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Audit-Id: a17b78fa-415e-466e-8ae8-a1653319ab27
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.969743   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-849000","namespace":"kube-system","uid":"d9414b5f-b783-46b5-bd41-e07fbd338491","resourceVersion":"303","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.206.134:2379","kubernetes.io/config.hash":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.mirror":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.seen":"2024-07-09T18:19:42.812164051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0709 11:20:09.969743   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.970269   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.970321   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.970321   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.979269   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.979269   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Audit-Id: cfddc806-0d43-46bb-bd86-3712a4ab9215
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.979994   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.980431   11080 pod_ready.go:92] pod "etcd-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.980497   11080 pod_ready.go:81] duration metric: took 19.7697ms for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980497   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980690   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-849000
	I0709 11:20:09.980722   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.980753   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.980753   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.984639   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:09.984639   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Audit-Id: 4f8bf9fa-3246-46ce-b3d4-8ea91623128e
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.985248   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-849000","namespace":"kube-system","uid":"185dfcae-7f97-43a4-8ad7-9c2e18ad93f4","resourceVersion":"300","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.206.134:8443","kubernetes.io/config.hash":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.mirror":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0709 11:20:09.986253   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.986253   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.986320   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.986320   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.990658   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.990658   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Audit-Id: fc9d97ed-a036-474e-af5f-aba9fc7ea966
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.991081   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.991515   11080 pod_ready.go:92] pod "kube-apiserver-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.991547   11080 pod_ready.go:81] duration metric: took 11.0006ms for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991547   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991623   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-849000
	I0709 11:20:09.991803   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.991803   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.991803   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.002697   11080 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 11:20:10.002697   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.002697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.002697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Audit-Id: 5618d530-048d-4e22-a41f-dbc85f57723c
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.003187   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.003187   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.003445   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-849000","namespace":"kube-system","uid":"84786301-1bd7-4d77-900b-1130c36259bc","resourceVersion":"316","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.mirror":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165951Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0709 11:20:10.004195   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.004275   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.004275   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.004275   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.011235   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:10.011235   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Audit-Id: b83b8a86-c88b-4eda-adbc-8a4b41174f1d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.011896   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.012314   11080 pod_ready.go:92] pod "kube-controller-manager-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.012440   11080 pod_ready.go:81] duration metric: took 20.8924ms for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012440   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012550   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qv64t
	I0709 11:20:10.012621   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.012662   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.012662   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.016102   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.016102   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Audit-Id: 9328b861-5000-4723-bef4-66bdf082cdc5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.016102   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qv64t","generateName":"kube-proxy-","namespace":"kube-system","uid":"64fd2bca-c117-405b-98c4-db980781839b","resourceVersion":"407","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"20beb658-ecf0-4085-ad20-237b0700e9f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20beb658-ecf0-4085-ad20-237b0700e9f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0709 11:20:10.017415   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.017554   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.017554   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.017554   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.021755   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.021755   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Audit-Id: 7b57217c-1b40-42ea-bd05-ba32c6c09379
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.022911   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.023043   11080 pod_ready.go:92] pod "kube-proxy-qv64t" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.023043   11080 pod_ready.go:81] duration metric: took 10.6037ms for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.023043   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.182509   11080 request.go:629] Waited for 159.4656ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182778   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182865   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.182865   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.182897   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.186242   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.186242   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Audit-Id: 821c7888-15a2-4ad9-a6ba-adc53ab8a4f6
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.186554   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.186784   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-849000","namespace":"kube-system","uid":"03dff506-a8f6-41bd-baac-1ef9ad6892e3","resourceVersion":"323","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.mirror":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.seen":"2024-07-09T18:19:42.812159751Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0709 11:20:10.385659   11080 request.go:629] Waited for 198.2784ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.385659   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.385659   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.389558   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.389771   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Audit-Id: 9cc904cb-e823-4a93-85c2-226f98e81fdf
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.390169   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.390760   11080 pod_ready.go:92] pod "kube-scheduler-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.390865   11080 pod_ready.go:81] duration metric: took 367.8204ms for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.390865   11080 pod_ready.go:38] duration metric: took 2.0051694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:10.390944   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0709 11:20:10.403679   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 11:20:10.435279   11080 command_runner.go:130] > 2115
	I0709 11:20:10.436278   11080 api_server.go:72] duration metric: took 13.4725942s to wait for apiserver process to appear ...
	I0709 11:20:10.436474   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0709 11:20:10.436474   11080 api_server.go:253] Checking apiserver healthz at https://172.18.206.134:8443/healthz ...
	I0709 11:20:10.445806   11080 api_server.go:279] https://172.18.206.134:8443/healthz returned 200:
	ok
	I0709 11:20:10.445926   11080 round_trippers.go:463] GET https://172.18.206.134:8443/version
	I0709 11:20:10.445926   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.445926   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.445926   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.448281   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:10.448281   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Audit-Id: 7be21a54-db6a-4318-a5ec-f0cce4ef44ab
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.448527   11080 round_trippers.go:580]     Content-Length: 263
	I0709 11:20:10.448527   11080 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0709 11:20:10.448527   11080 api_server.go:141] control plane version: v1.30.2
	I0709 11:20:10.448527   11080 api_server.go:131] duration metric: took 12.0534ms to wait for apiserver health ...
	I0709 11:20:10.448527   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 11:20:10.589225   11080 request.go:629] Waited for 140.697ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.589493   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.589493   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.594092   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.594092   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Audit-Id: 2b8208e7-66c3-407d-a513-81f6367a1a50
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.594092   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.594453   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.594453   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.596104   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.598949   11080 system_pods.go:59] 8 kube-system pods found
	I0709 11:20:10.599087   11080 system_pods.go:61] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.599087   11080 system_pods.go:74] duration metric: took 150.5589ms to wait for pod list to return data ...
	I0709 11:20:10.599087   11080 default_sa.go:34] waiting for default service account to be created ...
	I0709 11:20:10.792113   11080 request.go:629] Waited for 192.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792224   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792412   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.792412   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.792412   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.796230   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.796230   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.796230   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Content-Length: 261
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Audit-Id: bc150d93-fb7c-4582-beac-a89c1e26ce41
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.796858   11080 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1dc179c9-669f-4ab7-8a39-5d6fc6670d2d","resourceVersion":"341","creationTimestamp":"2024-07-09T18:19:56Z"}}]}
	I0709 11:20:10.797248   11080 default_sa.go:45] found service account: "default"
	I0709 11:20:10.797329   11080 default_sa.go:55] duration metric: took 198.009ms for default service account to be created ...
	I0709 11:20:10.797329   11080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 11:20:10.981424   11080 request.go:629] Waited for 183.8495ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981505   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981752   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.981752   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.981752   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.987139   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:10.987139   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.987139   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Audit-Id: dc7e70c7-c26f-47bd-af7e-e59f9f0e6a12
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.987854   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.990198   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.994984   11080 system_pods.go:86] 8 kube-system pods found
	I0709 11:20:10.994984   11080 system_pods.go:89] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.995749   11080 system_pods.go:126] duration metric: took 198.4185ms to wait for k8s-apps to be running ...
	I0709 11:20:10.995749   11080 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 11:20:11.006411   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:20:11.032299   11080 system_svc.go:56] duration metric: took 36.2519ms WaitForService to wait for kubelet
	I0709 11:20:11.032384   11080 kubeadm.go:576] duration metric: took 14.0686983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:20:11.032449   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0709 11:20:11.185036   11080 request.go:629] Waited for 152.3704ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:11.185036   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:11.185036   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:11.188676   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:11.188676   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:11 GMT
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Audit-Id: de445958-d4f3-421b-bce6-7208e043ef68
	I0709 11:20:11.189854   11080 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0709 11:20:11.190610   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 11:20:11.190610   11080 node_conditions.go:123] node cpu capacity is 2
	I0709 11:20:11.190610   11080 node_conditions.go:105] duration metric: took 158.1605ms to run NodePressure ...
	I0709 11:20:11.190610   11080 start.go:240] waiting for startup goroutines ...
	I0709 11:20:11.190610   11080 start.go:245] waiting for cluster config update ...
	I0709 11:20:11.190610   11080 start.go:254] writing updated cluster config ...
	I0709 11:20:11.194395   11080 out.go:177] 
	I0709 11:20:11.197726   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.210868   11080 out.go:177] * Starting "multinode-849000-m02" worker node in "multinode-849000" cluster
	I0709 11:20:11.213536   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:20:11.214479   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:20:11.214815   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:20:11.215058   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:20:11.215282   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.219596   11080 start.go:360] acquireMachinesLock for multinode-849000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:20:11.219782   11080 start.go:364] duration metric: took 159µs to acquireMachinesLock for "multinode-849000-m02"
	I0709 11:20:11.219811   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0709 11:20:11.219811   11080 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0709 11:20:11.223353   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:20:11.223353   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:20:11.223353   11080 client.go:168] LocalClient.Create starting
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224657   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:20:13.151358   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:20:13.151782   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:13.151847   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:20:14.883405   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:20:14.883642   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:14.883703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:20.080459   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:20:20.573750   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: Creating VM...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:23.656383   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:23.657490   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:23.657490   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:20:23.657579   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:25.447625   11080 main.go:141] libmachine: Creating VHD
	I0709 11:20:25.447625   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5E53C6D0-5109-4D35-B1EC-1393270CA44B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:20:29.284763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:20:32.544147   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:32.544825   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:32.544942   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -SizeBytes 20000MB
	I0709 11:20:35.179825   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-849000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000-m02 -DynamicMemoryEnabled $false
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000-m02 -Count 2
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:43.474205   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\boot2docker.iso'
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:46.097188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd'
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: Starting VM...
	I0709 11:20:49.141353   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000-m02
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:52.444588   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:20:52.444802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:54.848352   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:57.488165   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:57.488298   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:58.493459   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:00.761195   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:03.353161   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:03.353743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:04.368700   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:06.644937   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:10.193913   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:16.096106   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:18.442305   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:23.279312   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:21:23.279415   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:25.559526   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:25.560574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:25.560679   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:28.232227   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:28.233232   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:28.238921   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:28.250822   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:28.250822   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:21:28.388458   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:21:28.388571   11080 buildroot.go:166] provisioning hostname "multinode-849000-m02"
	I0709 11:21:28.388571   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:30.618011   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:33.212355   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:33.212671   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:33.219551   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:33.220082   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:33.220082   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000-m02 && echo "multinode-849000-m02" | sudo tee /etc/hostname
	I0709 11:21:33.391210   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000-m02
	
	I0709 11:21:33.391343   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:35.578543   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:38.191886   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:38.192615   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:38.192615   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:21:38.341565   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:21:38.341639   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:21:38.341639   11080 buildroot.go:174] setting up certificates
	I0709 11:21:38.341639   11080 provision.go:84] configureAuth start
	I0709 11:21:38.341639   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:43.076717   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:45.280910   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:45.281082   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:45.281156   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:47.878898   11080 provision.go:143] copyHostCerts
	I0709 11:21:47.879605   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:21:47.880180   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:21:47.880180   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:21:47.880971   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:21:47.882540   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:21:47.883125   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:21:47.883125   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:21:47.883679   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:21:47.885058   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:21:47.885436   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:21:47.885557   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:21:47.886134   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:21:47.887498   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000-m02 san=[127.0.0.1 172.18.205.211 localhost minikube multinode-849000-m02]
	I0709 11:21:48.001674   11080 provision.go:177] copyRemoteCerts
	I0709 11:21:48.013068   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:21:48.014084   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:50.250018   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:50.250215   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:50.250314   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:52.836979   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:52.837914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:52.838808   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:21:52.940691   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9274594s)
	I0709 11:21:52.940691   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:21:52.941438   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:21:52.990054   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:21:52.990054   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:21:53.038708   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:21:53.039254   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0709 11:21:53.086100   11080 provision.go:87] duration metric: took 14.7444116s to configureAuth
	I0709 11:21:53.086158   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:21:53.086860   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:21:53.086990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:55.350257   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:55.351179   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:55.351218   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:57.996542   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:57.997434   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:57.997434   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:21:58.134576   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:21:58.134576   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:21:58.135124   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:21:58.135124   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:00.283090   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:00.284070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:00.284213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:02.866133   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:02.866377   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:02.871379   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:02.872132   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:02.872132   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.206.134"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:22:03.038743   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.206.134
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:22:03.038743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:05.225105   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:07.815935   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:07.816766   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:07.816766   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:22:10.033737   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:22:10.033805   11080 machine.go:97] duration metric: took 46.7543344s to provisionDockerMachine
	I0709 11:22:10.033805   11080 client.go:171] duration metric: took 1m58.8100611s to LocalClient.Create
	I0709 11:22:10.033904   11080 start.go:167] duration metric: took 1m58.81016s to libmachine.API.Create "multinode-849000"
	I0709 11:22:10.033904   11080 start.go:293] postStartSetup for "multinode-849000-m02" (driver="hyperv")
	I0709 11:22:10.033904   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:22:10.049483   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:22:10.049483   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:12.196759   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:14.773966   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:14.774211   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:14.774388   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:14.880469   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8308404s)
	I0709 11:22:14.893820   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:22:14.900205   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:22:14.900586   11080 command_runner.go:130] > ID=buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:22:14.900586   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:22:14.900878   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:22:14.900958   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:22:14.901694   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:22:14.902949   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:22:14.903007   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:22:14.914648   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:22:14.931988   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:22:14.976672   11080 start.go:296] duration metric: took 4.9427507s for postStartSetup
	I0709 11:22:14.980296   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:17.149588   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:19.731744   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:22:19.734373   11080 start.go:128] duration metric: took 2m8.5141378s to createHost
	I0709 11:22:19.734498   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:21.884569   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:21.885475   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:21.885570   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:24.462310   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:24.462866   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:24.462866   11080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 11:22:24.602515   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549344.609926885
	
	I0709 11:22:24.602629   11080 fix.go:216] guest clock: 1720549344.609926885
	I0709 11:22:24.602629   11080 fix.go:229] Guest: 2024-07-09 11:22:24.609926885 -0700 PDT Remote: 2024-07-09 11:22:19.7344985 -0700 PDT m=+344.108245701 (delta=4.875428385s)
	I0709 11:22:24.602743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:26.788501   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:29.322797   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:29.323325   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:29.323492   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549344
	I0709 11:22:29.467864   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:22:24 UTC 2024
	
	I0709 11:22:29.467922   11080 fix.go:236] clock set: Tue Jul  9 18:22:24 UTC 2024
	 (err=<nil>)
	I0709 11:22:29.467976   11080 start.go:83] releasing machines lock for "multinode-849000-m02", held for 2m18.2477075s
	I0709 11:22:29.468213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:31.622432   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:31.623654   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:31.623715   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:34.183731   11080 out.go:177] * Found network options:
	I0709 11:22:34.186860   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.188920   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.191174   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.194227   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 11:22:34.195301   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.198398   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:22:34.198526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:34.208413   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:22:34.209355   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474885   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:39.120904   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.121123   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.121331   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.150109   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.214930   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0709 11:22:39.216101   11080 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0076706s)
	W0709 11:22:39.216101   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:22:39.228355   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:22:39.361349   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:22:39.361418   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:22:39.361418   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1630028s)
	I0709 11:22:39.361567   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:22:39.361605   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:39.361773   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:39.395534   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:22:39.411076   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:22:39.440578   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:22:39.459507   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:22:39.472271   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:22:39.503478   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.535129   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:22:39.565594   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.596645   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:22:39.626303   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:22:39.657871   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:22:39.687857   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:22:39.718726   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:22:39.737354   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:22:39.750092   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:22:39.780554   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:39.961136   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 11:22:40.003477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:40.015211   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:22:40.037706   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:22:40.037931   11080 command_runner.go:130] > [Unit]
	I0709 11:22:40.037931   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:22:40.037931   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:22:40.037931   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:22:40.037931   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:22:40.037996   11080 command_runner.go:130] > [Service]
	I0709 11:22:40.037996   11080 command_runner.go:130] > Type=notify
	I0709 11:22:40.037996   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:22:40.037996   11080 command_runner.go:130] > Environment=NO_PROXY=172.18.206.134
	I0709 11:22:40.037996   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:22:40.037996   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:22:40.038089   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:22:40.038089   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:22:40.038089   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:22:40.038089   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:22:40.038089   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:22:40.038158   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:22:40.038158   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:22:40.038158   11080 command_runner.go:130] > ExecStart=
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:22:40.038260   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:22:40.038260   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:22:40.038260   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:22:40.038323   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:22:40.038430   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:22:40.038469   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:22:40.038532   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:22:40.038566   11080 command_runner.go:130] > Delegate=yes
	I0709 11:22:40.038566   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:22:40.038566   11080 command_runner.go:130] > KillMode=process
	I0709 11:22:40.038566   11080 command_runner.go:130] > [Install]
	I0709 11:22:40.038609   11080 command_runner.go:130] > WantedBy=multi-user.target
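The drop-in unit dumped above relies on systemd's ExecStart reset rule that the file's own comments describe: a bare `ExecStart=` clears any command inherited from the base unit, so only the redefined dockerd command survives. A minimal sketch of that parsing rule (this is an illustrative re-implementation, not systemd or minikube code):

```python
def effective_execstart(lines):
    """Mimic systemd's handling of repeated ExecStart= directives:
    an empty 'ExecStart=' clears commands inherited from earlier unit
    files or drop-ins, so only values set afterwards survive."""
    cmds = []
    for line in lines:
        if not line.startswith("ExecStart="):
            continue
        value = line[len("ExecStart="):].strip()
        if value == "":
            cmds = []          # bare 'ExecStart=' resets the list
        else:
            cmds.append(value)
    return cmds

# Abbreviated stand-in for the base unit plus the drop-in shown above.
unit = [
    "ExecStart=/usr/bin/dockerd",                        # base unit
    "ExecStart=",                                        # drop-in clears it
    "ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376",  # drop-in redefines
]
print(effective_execstart(unit))  # only the drop-in's command remains
```

Without the reset line, systemd would see two ExecStart commands and refuse to start the service, which is exactly the `Service has more than one ExecStart= setting` error the unit file's comment warns about.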
	I0709 11:22:40.055979   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.091794   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:22:40.154011   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.190664   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.226820   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:22:40.287595   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.308575   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:40.342070   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
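The `/etc/crictl.yaml` written above is a flat one-key YAML mapping that points crictl at the cri-dockerd socket. A sketch of writing and reading it back (using a temp file as a stand-in for `/etc/crictl.yaml` on the guest; no YAML library is needed for a single scalar entry):

```python
import os
import tempfile

# The exact content minikube tees into /etc/crictl.yaml above.
content = "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"

with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write(content)
    path = f.name

# crictl treats this as a simple 'key: value' mapping.
with open(path) as f:
    cfg = dict(line.split(": ", 1) for line in f.read().splitlines() if line)

print(cfg["runtime-endpoint"])  # the socket crictl will talk to
os.unlink(path)
```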
	I0709 11:22:40.354449   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:22:40.359803   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:22:40.371212   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:22:40.388323   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:22:40.433437   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:22:40.633922   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:22:40.820826   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:22:40.820826   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:22:40.864181   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:41.057366   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:23:42.172852   11080 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0709 11:23:42.172852   11080 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0709 11:23:42.173160   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1155866s)
	I0709 11:23:42.185419   11080 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.209973   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.210951   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211574   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211639   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
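The failure sequence above ends with dockerd[1068] unable to dial `/run/containerd/containerd.sock` before its context deadline, which is why `systemctl restart docker` blocked for roughly a minute and then failed. The general dial-with-deadline pattern can be sketched as follows (`dial_unix` is a hypothetical helper for illustration, not docker or minikube code; the missing socket path stands in for containerd never coming up):

```python
import socket
import time

def dial_unix(path, deadline_s=1.0, retry_s=0.2):
    """Retry connecting to a Unix socket until a deadline, then give up
    with the last error -- mirroring the 'context deadline exceeded'
    failure dockerd reports above when containerd's socket never appears."""
    end = time.monotonic() + deadline_s
    last_err = None
    while time.monotonic() < end:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return s  # connected: caller owns the socket
        except OSError as e:
            last_err = e
            s.close()
            time.sleep(retry_s)
    raise TimeoutError(f"dial {path}: deadline exceeded ({last_err})")

try:
    dial_unix("/tmp/nonexistent-containerd.sock", deadline_s=0.5)
except TimeoutError as e:
    print("dial failed:", e)
```

Each retry fails fast because the socket file does not exist; the caller only surfaces an error once the whole deadline is spent, which matches the ~60 s gap between `Starting up` at 18:22:42 and the `failed to dial` message at 18:23:42 in the journal.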
	I0709 11:23:42.221589   11080 out.go:177] 
	W0709 11:23:42.223827   11080 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0709 11:23:42.223827   11080 out.go:239] * 
	W0709 11:23:42.225718   11080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0709 11:23:42.228228   11080 out.go:177] 
	
	
	==> Docker <==
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597835991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597891091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597905791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597983991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597776491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d8c6b21616c767448c4be98bae932ed2b404a3dadcf2b2b4b157e8bcf347ea/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a33ce3348449c0faec48fb58b4574718ba6b78d837824e60579921c71f06d76/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968184436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968452735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968474235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968801835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.141801596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.142933705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.143853812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.144140014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904534514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904809014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904875715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904980715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:18 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/216d18e70c2fb87f116d16247afca62184ce070d4aca7bbce19d833808db917c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 09 18:24:19 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285320124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285707025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285773326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285917526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c7a0fcb9e869e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   216d18e70c2fb       busybox-fc5497c4f-f2j8m
	c150592e658c3       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   0                   2a33ce3348449       coredns-7db6d8ff4d-lzsvc
	37c7b8e14dc9c       6e38f40d628db                                                                                         22 minutes ago      Running             storage-provisioner       0                   06d8c6b21616c       storage-provisioner
	f3de6fb5f7f77       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              22 minutes ago      Running             kindnet-cni               0                   668c809456776       kindnet-8ww8c
	02ab9d1727686       53c535741fb44                                                                                         22 minutes ago      Running             kube-proxy                0                   0a60f24294838       kube-proxy-qv64t
	0272c56037c7d       3861cfcd7c04c                                                                                         23 minutes ago      Running             etcd                      0                   2c574be2cc6d3       etcd-multinode-849000
	8661e349d48ab       7820c83aa1394                                                                                         23 minutes ago      Running             kube-scheduler            0                   b9412aa955ab7       kube-scheduler-multinode-849000
	a89ee753e7759       e874818b3caac                                                                                         23 minutes ago      Running             kube-controller-manager   0                   a610e3d24fa06       kube-controller-manager-multinode-849000
	556077ae2825d       56ce0fd9fb532                                                                                         23 minutes ago      Running             kube-apiserver            0                   2035bb8593f0e       kube-apiserver-multinode-849000
	
	
	==> coredns [c150592e658c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = eabdad51eef6fc649fa850c178ba451366b41048c1c621a6be25e706245d9103e597e4866d961c875c300d6a366ff9db50ab3afe55608b789039c53007846ed6
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54651 - 41351 "HINFO IN 6752767091270397564.1917026836058955763. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104932825s
	[INFO] 10.244.0.3:37665 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218301s
	[INFO] 10.244.0.3:33292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.095768808s
	[INFO] 10.244.0.3:51028 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033779908s
	[INFO] 10.244.0.3:52198 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.254317433s
	[INFO] 10.244.0.3:58685 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001442s
	[INFO] 10.244.0.3:50205 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.085049073s
	[INFO] 10.244.0.3:41462 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002117s
	[INFO] 10.244.0.3:46161 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002965s
	[INFO] 10.244.0.3:40010 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.038270523s
	[INFO] 10.244.0.3:50213 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181901s
	[INFO] 10.244.0.3:40333 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208801s
	[INFO] 10.244.0.3:33479 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001618s
	[INFO] 10.244.0.3:44590 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223001s
	[INFO] 10.244.0.3:58378 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001694s
	[INFO] 10.244.0.3:35676 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.0.3:50088 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126901s
	[INFO] 10.244.0.3:60351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000289801s
	[INFO] 10.244.0.3:33623 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000197201s
	[INFO] 10.244.0.3:60126 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001055s
	[INFO] 10.244.0.3:44284 - 5 "PTR IN 1.192.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150901s
	
	
	==> describe nodes <==
	Name:               multinode-849000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-849000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=multinode-849000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 18:19:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-849000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 18:42:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:20:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.206.134
	  Hostname:    multinode-849000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 af90c209c8a84d288c2d79663fa33a94
	  System UUID:                69e31ac5-0527-9e4a-81b6-556c6bac7963
	  Boot ID:                    5c1387e9-724e-4a1c-a3cc-dde77e8449e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f2j8m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-lzsvc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-multinode-849000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-8ww8c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-multinode-849000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-multinode-849000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-qv64t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-multinode-849000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m                node-controller  Node multinode-849000 event: Registered Node multinode-849000 in Controller
	  Normal  NodeReady                22m                kubelet          Node multinode-849000 status is now: NodeReady
	
	
	Name:               multinode-849000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-849000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=multinode-849000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_09T11_40_09_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 18:40:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-849000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 18:42:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:40:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:40:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:40:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:40:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.196.236
	  Hostname:    multinode-849000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 30665cda6be840e19de2d42101ee89bb
	  System UUID:                ddf7b545-8cfa-674d-b55f-fd48f2f9d4f5
	  Boot ID:                    c8391cc6-6aee-4957-ada5-1a481b0a3745
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hjks    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-sn4kd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m41s
	  kube-system                 kube-proxy-wdskl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m41s (x2 over 2m41s)  kubelet          Node multinode-849000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m41s (x2 over 2m41s)  kubelet          Node multinode-849000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s (x2 over 2m41s)  kubelet          Node multinode-849000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m38s                  node-controller  Node multinode-849000-m03 event: Registered Node multinode-849000-m03 in Controller
	  Normal  NodeReady                2m17s                  kubelet          Node multinode-849000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.061894] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul 9 18:18] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.172355] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[Jul 9 18:19] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.106297] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.542997] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.194600] systemd-fstab-generator[1056]: Ignoring "noauto" option for root device
	[  +0.225984] systemd-fstab-generator[1070]: Ignoring "noauto" option for root device
	[  +2.819794] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.174764] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.183052] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.284648] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[ +10.989764] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.110491] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.025456] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.572905] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.100801] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.070675] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.120083] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.551679] systemd-fstab-generator[2475]: Ignoring "noauto" option for root device
	[  +0.193907] kauditd_printk_skb: 12 callbacks suppressed
	[Jul 9 18:20] kauditd_printk_skb: 51 callbacks suppressed
	[Jul 9 18:24] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [0272c56037c7] <==
	{"level":"info","ts":"2024-07-09T18:19:37.819296Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-09T18:19:37.819456Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-09T18:19:37.820534Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.18.206.134:2379"}
	{"level":"info","ts":"2024-07-09T18:19:37.82294Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"88434b99d7bbd165","local-member-id":"e42eecf9634a170","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.8454Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:19:37.845615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-09T18:29:37.886741Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":687}
	{"level":"info","ts":"2024-07-09T18:29:37.900514Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":687,"took":"13.301342ms","hash":2108544045,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2121728,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-09T18:29:37.900644Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2108544045,"revision":687,"compact-revision":-1}
	{"level":"info","ts":"2024-07-09T18:34:37.903933Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-07-09T18:34:37.912189Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":927,"took":"7.652225ms","hash":1821337612,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-09T18:34:37.912513Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1821337612,"revision":927,"compact-revision":687}
	{"level":"info","ts":"2024-07-09T18:35:57.287138Z","caller":"traceutil/trace.go:171","msg":"trace[1176997031] linearizableReadLoop","detail":"{readStateIndex:1442; appliedIndex:1441; }","duration":"158.59851ms","start":"2024-07-09T18:35:57.12852Z","end":"2024-07-09T18:35:57.287118Z","steps":["trace[1176997031] 'read index received'  (duration: 137.916144ms)","trace[1176997031] 'applied index is now lower than readState.Index'  (duration: 20.680866ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-09T18:35:57.287544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.000512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-4hjks\" ","response":"range_response_count:1 size:2221"}
	{"level":"info","ts":"2024-07-09T18:35:57.287811Z","caller":"traceutil/trace.go:171","msg":"trace[632773735] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-4hjks; range_end:; response_count:1; response_revision:1233; }","duration":"159.270012ms","start":"2024-07-09T18:35:57.128515Z","end":"2024-07-09T18:35:57.287785Z","steps":["trace[632773735] 'agreement among raft nodes before linearized reading'  (duration: 158.812611ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:37:35.826214Z","caller":"traceutil/trace.go:171","msg":"trace[478726099] transaction","detail":"{read_only:false; response_revision:1311; number_of_response:1; }","duration":"158.19521ms","start":"2024-07-09T18:37:35.667982Z","end":"2024-07-09T18:37:35.826177Z","steps":["trace[478726099] 'process raft request'  (duration: 158.074409ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:39:37.921147Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1168}
	{"level":"info","ts":"2024-07-09T18:39:37.929404Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1168,"took":"7.948126ms","hash":3253994334,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-09T18:39:37.929571Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3253994334,"revision":1168,"compact-revision":927}
	{"level":"info","ts":"2024-07-09T18:40:13.451954Z","caller":"traceutil/trace.go:171","msg":"trace[1502299339] transaction","detail":"{read_only:false; response_revision:1471; number_of_response:1; }","duration":"179.100678ms","start":"2024-07-09T18:40:13.272835Z","end":"2024-07-09T18:40:13.451935Z","steps":["trace[1502299339] 'process raft request'  (duration: 178.950978ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T18:40:14.005634Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.253227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-849000-m03\" ","response":"range_response_count:1 size:2848"}
	{"level":"info","ts":"2024-07-09T18:40:14.005805Z","caller":"traceutil/trace.go:171","msg":"trace[2101599561] range","detail":"{range_begin:/registry/minions/multinode-849000-m03; range_end:; response_count:1; response_revision:1472; }","duration":"132.404128ms","start":"2024-07-09T18:40:13.873328Z","end":"2024-07-09T18:40:14.005732Z","steps":["trace[2101599561] 'range keys from in-memory index tree'  (duration: 131.983226ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:40:19.670021Z","caller":"traceutil/trace.go:171","msg":"trace[1040829640] transaction","detail":"{read_only:false; response_revision:1479; number_of_response:1; }","duration":"173.817261ms","start":"2024-07-09T18:40:19.496184Z","end":"2024-07-09T18:40:19.670001Z","steps":["trace[1040829640] 'process raft request'  (duration: 173.61226ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T18:40:21.061754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.020023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-849000-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-07-09T18:40:21.061828Z","caller":"traceutil/trace.go:171","msg":"trace[42653553] range","detail":"{range_begin:/registry/minions/multinode-849000-m03; range_end:; response_count:1; response_revision:1481; }","duration":"193.165323ms","start":"2024-07-09T18:40:20.868649Z","end":"2024-07-09T18:40:21.061814Z","steps":["trace[42653553] 'range keys from in-memory index tree'  (duration: 192.928723ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:42:50 up 25 min,  0 users,  load average: 0.45, 0.48, 0.39
	Linux multinode-849000 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f3de6fb5f7f7] <==
	I0709 18:41:47.763666       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:41:57.778110       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:41:57.778251       1 main.go:227] handling current node
	I0709 18:41:57.778394       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:41:57.778406       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:42:07.792056       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:42:07.792260       1 main.go:227] handling current node
	I0709 18:42:07.792341       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:42:07.792364       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:42:17.802955       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:42:17.803001       1 main.go:227] handling current node
	I0709 18:42:17.803013       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:42:17.803020       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:42:27.816222       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:42:27.816384       1 main.go:227] handling current node
	I0709 18:42:27.816400       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:42:27.816407       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:42:37.826818       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:42:37.826927       1 main.go:227] handling current node
	I0709 18:42:37.826940       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:42:37.826947       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:42:47.842691       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:42:47.842803       1 main.go:227] handling current node
	I0709 18:42:47.842819       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:42:47.842827       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [556077ae2825] <==
	I0709 18:19:39.638553       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0709 18:19:39.698240       1 shared_informer.go:320] Caches are synced for configmaps
	I0709 18:19:39.700011       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0709 18:19:39.702635       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0709 18:19:39.714433       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0709 18:19:40.505081       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0709 18:19:40.517142       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0709 18:19:40.517305       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0709 18:19:41.636583       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0709 18:19:41.706223       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0709 18:19:41.808149       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0709 18:19:41.821195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.206.134]
	I0709 18:19:41.822637       1 controller.go:615] quota admission added evaluator for: endpoints
	I0709 18:19:41.843642       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0709 18:19:42.609385       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0709 18:19:42.805564       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0709 18:19:42.871569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0709 18:19:42.907682       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0709 18:19:57.333598       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0709 18:19:57.543081       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0709 18:35:55.870544       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53940: use of closed network connection
	E0709 18:35:56.795209       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53945: use of closed network connection
	E0709 18:35:57.698486       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53950: use of closed network connection
	E0709 18:36:33.178526       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53970: use of closed network connection
	E0709 18:36:43.597768       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53972: use of closed network connection
	
	
	==> kube-controller-manager [a89ee753e775] <==
	I0709 18:19:57.743180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="172.458844ms"
	I0709 18:19:57.765649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.805292ms"
	I0709 18:19:57.815368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.660854ms"
	I0709 18:19:57.815916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.6µs"
	I0709 18:19:58.007755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.828816ms"
	I0709 18:19:58.026709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.106923ms"
	I0709 18:19:58.029403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.1µs"
	I0709 18:20:07.977654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.049991ms"
	I0709 18:20:08.015594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111µs"
	I0709 18:20:09.991729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.353168ms"
	I0709 18:20:10.001112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="868.106µs"
	I0709 18:20:11.554561       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0709 18:24:17.420348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.233775ms"
	I0709 18:24:17.441694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.911551ms"
	I0709 18:24:17.444364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.629006ms"
	I0709 18:24:20.165672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.094324ms"
	I0709 18:24:20.166173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	I0709 18:40:08.595141       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-849000-m03\" does not exist"
	I0709 18:40:08.641712       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-849000-m03" podCIDRs=["10.244.1.0/24"]
	I0709 18:40:11.793433       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-849000-m03"
	I0709 18:40:32.591516       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-849000-m03"
	I0709 18:40:32.616362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="263.401µs"
	I0709 18:40:32.638542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.1µs"
	I0709 18:40:35.404984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.084842ms"
	I0709 18:40:35.405359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.3µs"
	
	
	==> kube-proxy [02ab9d172768] <==
	I0709 18:19:58.913720       1 server_linux.go:69] "Using iptables proxy"
	I0709 18:19:58.935439       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.206.134"]
	I0709 18:19:59.002175       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 18:19:59.002345       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 18:19:59.002422       1 server_linux.go:165] "Using iptables Proxier"
	I0709 18:19:59.006984       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 18:19:59.008394       1 server.go:872] "Version info" version="v1.30.2"
	I0709 18:19:59.008567       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 18:19:59.012208       1 config.go:192] "Starting service config controller"
	I0709 18:19:59.012230       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 18:19:59.012257       1 config.go:101] "Starting endpoint slice config controller"
	I0709 18:19:59.012263       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 18:19:59.014777       1 config.go:319] "Starting node config controller"
	I0709 18:19:59.015900       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 18:19:59.113145       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0709 18:19:59.113150       1 shared_informer.go:320] Caches are synced for service config
	I0709 18:19:59.116402       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8661e349d48a] <==
	W0709 18:19:40.760717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0709 18:19:40.760830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0709 18:19:40.849864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0709 18:19:40.850245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0709 18:19:40.865437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.865496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.872200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0709 18:19:40.872364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0709 18:19:40.917325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.917365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.931008       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.931093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.976206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0709 18:19:40.976434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0709 18:19:41.005485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0709 18:19:41.005666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0709 18:19:41.019785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0709 18:19:41.020146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0709 18:19:41.110495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 18:19:41.110614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0709 18:19:41.120707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0709 18:19:41.122629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0709 18:19:41.253897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0709 18:19:41.254338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0709 18:19:43.553553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 09 18:38:42 multinode-849000 kubelet[2293]: E0709 18:38:42.972834    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:38:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:38:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:38:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:38:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:39:42 multinode-849000 kubelet[2293]: E0709 18:39:42.974504    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:39:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:39:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:39:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:39:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:40:42 multinode-849000 kubelet[2293]: E0709 18:40:42.973444    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:41:42 multinode-849000 kubelet[2293]: E0709 18:41:42.971444    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:41:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:41:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:41:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:41:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:42:42 multinode-849000 kubelet[2293]: E0709 18:42:42.972527    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:42:42.085643    2464 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
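An aside on the recurring `Unable to resolve the current Docker CLI context "default"` warning above: the long hex directory in the path is not opaque noise — Docker names each context's metadata directory after the SHA-256 of the context name, and the hash of the string `default` is exactly the segment shown. A quick check (Python used here purely for illustration):

```python
import hashlib

# Docker stores context metadata under sha256(<context name>);
# the directory in the warning is sha256("default").
print(hashlib.sha256(b"default").hexdigest())
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

The warning itself is harmless for these tests (minikube falls back to the default Docker endpoint when the context metadata file is missing).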
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000: (12.0772706s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-849000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (69.75s)

                                                
                                    
TestMultiNode/serial/StopNode (120.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 node stop m03
E0709 11:43:04.145235   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-849000 node stop m03: (34.5969502s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status: exit status 7 (26.2741888s)

                                                
                                                
-- stdout --
	multinode-849000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-849000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-849000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:43:38.480131    9100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
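The `W0709 11:43:38.480131    9100 main.go:291] …` prefix on these stderr lines is the standard klog/glog header (`[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`). When triaging long dumps like this report, a throwaway parser can be handy; this sketch is my own (the regex and field names are assumptions, not part of minikube):

```python
import re

# klog header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG = re.compile(
    r'^(?P<level>[IWEF])(?P<mmdd>\d{4}) '       # severity + month/day
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'      # wall-clock time
    r'(?P<tid>\d+) '                            # thread/goroutine id
    r'(?P<file>[^:]+):(?P<line>\d+)\] '         # source location
    r'(?P<msg>.*)$'                             # message body
)

m = KLOG.match('E0709 11:43:04.145235   15032 cert_rotation.go:168] key failed with : open ...')
print(m['level'], m['file'], m['line'])
# → E cert_rotation.go 168
```

Filtering a `--alsologtostderr` dump down to `level in ("E", "F")` lines is usually the fastest way to spot the actual failure among the status-polling chatter.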
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status --alsologtostderr: exit status 7 (26.321753s)

                                                
                                                
-- stdout --
	multinode-849000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-849000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-849000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:44:04.748353    6168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0709 11:44:04.755785    6168 out.go:291] Setting OutFile to fd 688 ...
	I0709 11:44:04.756850    6168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:44:04.756850    6168 out.go:304] Setting ErrFile to fd 424...
	I0709 11:44:04.756850    6168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:44:04.771979    6168 out.go:298] Setting JSON to false
	I0709 11:44:04.772896    6168 mustload.go:65] Loading cluster: multinode-849000
	I0709 11:44:04.772896    6168 notify.go:220] Checking for updates...
	I0709 11:44:04.773739    6168 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:44:04.773739    6168 status.go:255] checking status of multinode-849000 ...
	I0709 11:44:04.774917    6168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:44:06.978097    6168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:44:06.978244    6168 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:44:06.978244    6168 status.go:330] multinode-849000 host status = "Running" (err=<nil>)
	I0709 11:44:06.978244    6168 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:44:06.979113    6168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:44:09.179182    6168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:44:09.179463    6168 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:44:09.179544    6168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:44:11.818535    6168 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:44:11.818535    6168 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:44:11.818535    6168 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:44:11.832953    6168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 11:44:11.832953    6168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:44:13.949828    6168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:44:13.949828    6168 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:44:13.949894    6168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:44:16.552588    6168 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:44:16.552667    6168 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:44:16.552667    6168 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:44:16.652489    6168 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8195186s)
	I0709 11:44:16.665742    6168 ssh_runner.go:195] Run: systemctl --version
	I0709 11:44:16.688950    6168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:44:16.715441    6168 kubeconfig.go:125] found "multinode-849000" server: "https://172.18.206.134:8443"
	I0709 11:44:16.715441    6168 api_server.go:166] Checking apiserver status ...
	I0709 11:44:16.727611    6168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 11:44:16.770774    6168 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2115/cgroup
	W0709 11:44:16.792377    6168 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2115/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0709 11:44:16.803051    6168 ssh_runner.go:195] Run: ls
	I0709 11:44:16.818674    6168 api_server.go:253] Checking apiserver healthz at https://172.18.206.134:8443/healthz ...
	I0709 11:44:16.826090    6168 api_server.go:279] https://172.18.206.134:8443/healthz returned 200:
	ok
	I0709 11:44:16.826090    6168 status.go:422] multinode-849000 apiserver status = Running (err=<nil>)
	I0709 11:44:16.826775    6168 status.go:257] multinode-849000 status: &{Name:multinode-849000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0709 11:44:16.826775    6168 status.go:255] checking status of multinode-849000-m02 ...
	I0709 11:44:16.827654    6168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:44:19.043101    6168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:44:19.043101    6168 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:44:19.043386    6168 status.go:330] multinode-849000-m02 host status = "Running" (err=<nil>)
	I0709 11:44:19.043386    6168 host.go:66] Checking if "multinode-849000-m02" exists ...
	I0709 11:44:19.044387    6168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:44:21.283358    6168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:44:21.283992    6168 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:44:21.284067    6168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:44:23.901503    6168 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:44:23.901564    6168 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:44:23.901564    6168 host.go:66] Checking if "multinode-849000-m02" exists ...
	I0709 11:44:23.913581    6168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0709 11:44:23.913581    6168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:44:26.044706    6168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:44:26.044706    6168 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:44:26.044706    6168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:44:28.630035    6168 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:44:28.631025    6168 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:44:28.631161    6168 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:44:28.724439    6168 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8108402s)
	I0709 11:44:28.736845    6168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:44:28.761880    6168 status.go:257] multinode-849000-m02 status: &{Name:multinode-849000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0709 11:44:28.761880    6168 status.go:255] checking status of multinode-849000-m03 ...
	I0709 11:44:28.762876    6168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:44:30.940190    6168 main.go:141] libmachine: [stdout =====>] : Off
	
	I0709 11:44:30.949355    6168 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:44:30.949355    6168 status.go:330] multinode-849000-m03 host status = "Stopped" (err=<nil>)
	I0709 11:44:30.949355    6168 status.go:343] host is not running, skipping remaining checks
	I0709 11:44:30.949355    6168 status.go:257] multinode-849000-m03 status: &{Name:multinode-849000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
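The `status.go:257` lines above print the Go status struct verbatim (`&{Name:… Host:… Kubelet:…}`). For flat dumps like these, a small ad-hoc parser makes them scriptable; this is an illustrative sketch only (it assumes no nested braces in the payload, which holds for these status lines):

```python
import re

def parse_go_struct(line: str) -> dict:
    """Pull Key:Value pairs out of a flat Go '&{...}' struct dump."""
    body = re.search(r'&\{(.*)\}', line).group(1)
    # Empty fields (e.g. 'TimeToStop:') parse as empty strings.
    return dict(re.findall(r'(\w+):(\S*)', body))

s = parse_go_struct(
    "multinode-849000-m03 status: &{Name:multinode-849000-m03 Host:Stopped "
    "Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true "
    "TimeToStop: DockerEnv: PodManEnv:}"
)
print(s["Host"], s["Kubelet"], s["Worker"])
# → Stopped Stopped true
```

With the three node dumps parsed this way, the assertion failure that follows (`incorrect number of running kubelets`) reduces to counting entries with `Kubelet == "Running"`.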
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-windows-amd64.exe -p multinode-849000 status --alsologtostderr": multinode-849000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode-849000-m02
type: Worker
host: Running
kubelet: Stopped

                                                
                                                
multinode-849000-m03
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-windows-amd64.exe -p multinode-849000 status --alsologtostderr": multinode-849000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode-849000-m02
type: Worker
host: Running
kubelet: Stopped

                                                
                                                
multinode-849000-m03
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000: (11.7477715s)
helpers_test.go:244: <<< TestMultiNode/serial/StopNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25: (8.3034364s)
helpers_test.go:252: TestMultiNode/serial/StopNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-849000 -- apply -f                   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:24 PDT | 09 Jul 24 11:24 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- rollout                    | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:24 PDT |                     |
	|         | status deployment/busybox                         |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --                        |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m -- nslookup               |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o                | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT | 09 Jul 24 11:36 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT |                     |
	|         | busybox-fc5497c4f-4hjks                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT | 09 Jul 24 11:36 PDT |
	|         | busybox-fc5497c4f-f2j8m                           |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec                       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT |                     |
	|         | busybox-fc5497c4f-f2j8m -- sh                     |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.192.1                         |                  |                   |         |                     |                     |
	| node    | add -p multinode-849000 -v 3                      | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:37 PDT | 09 Jul 24 11:40 PDT |
	|         | --alsologtostderr                                 |                  |                   |         |                     |                     |
	| node    | multinode-849000 node stop m03                    | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:43 PDT | 09 Jul 24 11:43 PDT |
	|---------|---------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 11:16:35
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 11:16:35.706571   11080 out.go:291] Setting OutFile to fd 1856 ...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.707294   11080 out.go:304] Setting ErrFile to fd 1916...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.730175   11080 out.go:298] Setting JSON to false
	I0709 11:16:35.734088   11080 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7264,"bootTime":1720541731,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 11:16:35.734088   11080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 11:16:35.740900   11080 out.go:177] * [multinode-849000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 11:16:35.746952   11080 notify.go:220] Checking for updates...
	I0709 11:16:35.749517   11080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:16:35.752016   11080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 11:16:35.754074   11080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 11:16:35.757149   11080 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 11:16:35.759785   11080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 11:16:35.763232   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:16:35.763232   11080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 11:16:41.108594   11080 out.go:177] * Using the hyperv driver based on user configuration
	I0709 11:16:41.113436   11080 start.go:297] selected driver: hyperv
	I0709 11:16:41.113436   11080 start.go:901] validating driver "hyperv" against <nil>
	I0709 11:16:41.113436   11080 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 11:16:41.161717   11080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 11:16:41.163562   11080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:16:41.163562   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:16:41.163562   11080 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0709 11:16:41.163562   11080 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0709 11:16:41.163562   11080 start.go:340] cluster config:
	{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:16:41.164325   11080 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 11:16:41.169436   11080 out.go:177] * Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	I0709 11:16:41.171790   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:16:41.171790   11080 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 11:16:41.171790   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:16:41.172900   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:16:41.173204   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:16:41.173497   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:16:41.173834   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json: {Name:mkcd76fd0991636c9ebb3945d5f6230c136234ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:360] acquireMachinesLock for multinode-849000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-849000"
	I0709 11:16:41.175145   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:16:41.175717   11080 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 11:16:41.178833   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:16:41.179697   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:16:41.179858   11080 client.go:168] LocalClient.Create starting
	I0709 11:16:41.180393   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181037   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:16:41.181305   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.181363   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181499   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:43.203345   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:16:44.905448   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:49.977487   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:49.978001   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:49.980413   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:16:50.481409   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: Creating VM...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:53.557877   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:16:53.557877   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:55.342337   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:55.343188   11080 main.go:141] libmachine: Creating VHD
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:16:59.073202   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 250EFD27-3D80-4D94-9BBB-C36AC3EE4AF2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:16:59.073277   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:16:59.081799   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:02.356056   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -SizeBytes 20000MB
	I0709 11:17:04.920871   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:04.921598   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:04.921696   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-849000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000 -DynamicMemoryEnabled $false
	I0709 11:17:10.906954   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000 -Count 2
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:13.117046   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\boot2docker.iso'
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:15.734748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd'
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:18.434648   11080 main.go:141] libmachine: Starting VM...
	I0709 11:17:18.434648   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000
	I0709 11:17:21.548427   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:23.856308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:23.857327   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:23.857477   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:26.424823   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:26.425555   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:27.429457   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:29.669589   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:33.238604   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:35.539152   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:39.150748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:41.412758   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:43.945561   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:43.946556   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:44.948904   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:47.223493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:49.888321   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:52.029346   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:17:52.029346   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:54.184452   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:56.739762   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:56.740551   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:56.747332   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:17:56.757962   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:17:56.757962   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:17:56.888454   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:17:56.888454   11080 buildroot.go:166] provisioning hostname "multinode-849000"
	I0709 11:17:56.888632   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:58.996092   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:01.596255   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:01.596966   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:01.596966   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000 && echo "multinode-849000" | sudo tee /etc/hostname
	I0709 11:18:01.744135   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000
	
	I0709 11:18:01.744309   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:03.902843   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:06.504362   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:06.505105   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:06.511047   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:06.511730   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:06.511730   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:18:06.661183   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:18:06.661276   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:18:06.661276   11080 buildroot.go:174] setting up certificates
	I0709 11:18:06.661276   11080 provision.go:84] configureAuth start
	I0709 11:18:06.661404   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:08.870371   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:08.871487   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:08.871619   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:11.480657   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:13.679886   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:13.680032   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:13.680386   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:16.351593   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:16.351812   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:16.351812   11080 provision.go:143] copyHostCerts
	I0709 11:18:16.351812   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:18:16.351812   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:18:16.352341   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:18:16.352562   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:18:16.353746   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:18:16.353870   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:18:16.353870   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:18:16.354397   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:18:16.355454   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:18:16.355782   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:18:16.355782   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:18:16.356143   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:18:16.357550   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000 san=[127.0.0.1 172.18.206.134 localhost minikube multinode-849000]
	I0709 11:18:16.528750   11080 provision.go:177] copyRemoteCerts
	I0709 11:18:16.542866   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:18:16.543526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:18.745596   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:18.746390   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:18.746524   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:21.394478   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:21.394661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:21.394962   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:21.507114   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9635719s)
	I0709 11:18:21.507261   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:18:21.507746   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:18:21.555636   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:18:21.556231   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0709 11:18:21.603561   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:18:21.604047   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:18:21.651880   11080 provision.go:87] duration metric: took 14.9904677s to configureAuth
	I0709 11:18:21.651880   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:18:21.652889   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:18:21.652889   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:23.890387   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:26.564345   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:26.565125   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:26.565125   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:18:26.688579   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:18:26.688579   11080 buildroot.go:70] root file system type: tmpfs
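The provisioner's filesystem probe above can be reproduced locally. This is a minimal sketch of the same command the log runs over SSH; on the minikube buildroot guest it prints `tmpfs`, meaning `/` is RAM-backed and unit files must be rewritten on every boot (on an ordinary host it will print that host's root fs type instead).

```shell
# Same probe as the log: report only the fstype column for /, keep the
# last line (the value, skipping the "Type" header).
root_fstype="$(df --output=fstype / | tail -n 1)"
echo "root filesystem: ${root_fstype}"
```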
	I0709 11:18:26.688751   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:18:26.688751   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:28.871918   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:31.502951   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:31.503345   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:31.503345   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:18:31.658280   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:18:31.658412   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:33.800464   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:36.418307   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:36.418361   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:36.423718   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:36.423718   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:36.424298   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:18:38.623401   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
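The `diff … || { mv …; systemctl … }` one-liner above is an install-if-changed pattern: the candidate unit is written to `docker.service.new`, and only when it differs from (or, as here, when there is no) existing unit is it moved into place and the daemon restarted. A safe-to-run sketch of the same logic, using a temp directory instead of `/lib/systemd/system` and stubbing out the privileged `systemctl` steps:

```shell
# Scratch stand-in for /lib/systemd/system (assumption: real paths need root).
unit_dir="$(mktemp -d)"
printf '%s\n' '[Unit]' 'Description=Docker Application Container Engine' \
  > "${unit_dir}/docker.service.new"

# If the current unit is missing or differs, promote the new one.
if ! diff -u "${unit_dir}/docker.service" "${unit_dir}/docker.service.new" 2>/dev/null; then
  mv "${unit_dir}/docker.service.new" "${unit_dir}/docker.service"
  # On the real guest this branch continues with:
  #   sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
fi
echo "unit installed: ${unit_dir}/docker.service"
```

The `diff: can't stat` line in the log is this same first-boot case: no prior unit exists, so the branch always fires and `enable` creates the `multi-user.target.wants` symlink.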
	I0709 11:18:38.623401   11080 machine.go:97] duration metric: took 46.5939015s to provisionDockerMachine
	I0709 11:18:38.624385   11080 client.go:171] duration metric: took 1m57.4441387s to LocalClient.Create
	I0709 11:18:38.624385   11080 start.go:167] duration metric: took 1m57.4442999s to libmachine.API.Create "multinode-849000"
	I0709 11:18:38.624385   11080 start.go:293] postStartSetup for "multinode-849000" (driver="hyperv")
	I0709 11:18:38.624385   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:18:38.635377   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:18:38.635377   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:40.803077   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:40.803227   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:40.803332   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:43.382675   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:43.483674   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8482809s)
	I0709 11:18:43.496129   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:18:43.504466   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:18:43.504466   11080 command_runner.go:130] > ID=buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:18:43.504466   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:18:43.504466   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:18:43.504466   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:18:43.505074   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:18:43.506014   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:18:43.506014   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:18:43.518207   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:18:43.536167   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:18:43.580014   11080 start.go:296] duration metric: took 4.955526s for postStartSetup
	I0709 11:18:43.583840   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:45.720485   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:48.244917   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:18:48.247885   11080 start.go:128] duration metric: took 2m7.0717492s to createHost
	I0709 11:18:48.247974   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:50.357356   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:52.893710   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:52.893837   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:52.893837   11080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 11:18:53.018311   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549133.027082640
	
	I0709 11:18:53.018311   11080 fix.go:216] guest clock: 1720549133.027082640
	I0709 11:18:53.018311   11080 fix.go:229] Guest: 2024-07-09 11:18:53.02708264 -0700 PDT Remote: 2024-07-09 11:18:48.2478857 -0700 PDT m=+132.622337601 (delta=4.77919694s)
	I0709 11:18:53.018461   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:55.134647   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:57.706817   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:57.707574   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:57.707574   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549133
	I0709 11:18:57.837990   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:18:53 UTC 2024
	
	I0709 11:18:57.837990   11080 fix.go:236] clock set: Tue Jul  9 18:18:53 UTC 2024
	 (err=<nil>)
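The clock-fix step above reads the guest clock (`date +%s.%N`), computes the drift against the host (4.77s here), and resets the guest with `sudo date -s @<epoch>`. A sketch of how that epoch maps to the timestamp the log reports, using the value taken from the log (GNU `date -d` assumed; BSD date spells this differently):

```shell
# Epoch seconds the provisioner pushed to the guest (from the log above).
guest_epoch=1720549133
# Render it as the guest's `date` did after `sudo date -s @…`:
date -u -d "@${guest_epoch}" '+%a %b %e %H:%M:%S UTC %Y'
# → Tue Jul  9 18:18:53 UTC 2024, matching the "clock set" line.
```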
	I0709 11:18:57.837990   11080 start.go:83] releasing machines lock for "multinode-849000", held for 2m16.662394s
	I0709 11:18:57.837990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:59.937542   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:02.440702   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:19:02.440914   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:02.450148   11080 ssh_runner.go:195] Run: cat /version.json
	I0709 11:19:02.451159   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.652788   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:07.368844   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.369236   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.369437   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.395266   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.516234   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:19:07.516234   11080 command_runner.go:130] > {"iso_version": "v1.33.1-1720433170-19199", "kicbase_version": "v0.0.44-1720012048-19186", "minikube_version": "v1.33.1", "commit": "41ed6339bbe6a947e5e92015e7dd216db14d0b72"}
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: cat /version.json: (5.0661785s)
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0755151s)
	I0709 11:19:07.529057   11080 ssh_runner.go:195] Run: systemctl --version
	I0709 11:19:07.538439   11080 command_runner.go:130] > systemd 252 (252)
	I0709 11:19:07.538533   11080 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0709 11:19:07.550293   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:19:07.559188   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0709 11:19:07.559555   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:19:07.570397   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:19:07.596860   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:19:07.598042   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:19:07.598090   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:07.598448   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:07.631211   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:19:07.642798   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:19:07.672487   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:19:07.691044   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:19:07.702345   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:19:07.737161   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.766120   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:19:07.798415   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.831110   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:19:07.865314   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:19:07.899412   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:19:07.929191   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:19:07.959649   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:19:07.977886   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:19:07.990402   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:19:08.021057   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:08.212039   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
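The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` driver and the pinned pause image. A sketch of the two key substitutions against a scratch copy of the config (the real target path needs root, so a temp file is used here):

```shell
# Scratch stand-in for /etc/containerd/config.toml.
cfg="$(mktemp)"
cat > "${cfg}" <<'EOF'
    SystemdCgroup = true
    sandbox_image = "registry.k8s.io/pause:3.8"
EOF

# Same expressions the log runs: preserve leading indentation (\1),
# replace the value.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "${cfg}"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "${cfg}"
cat "${cfg}"
```

After the edits the daemon is reloaded and containerd restarted, exactly as the two `systemctl` lines above show.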
	I0709 11:19:08.247477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:08.260899   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Unit]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:19:08.287773   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:19:08.287773   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:19:08.287773   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Service]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Type=notify
	I0709 11:19:08.287773   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:19:08.287773   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:19:08.287773   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:19:08.287773   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:19:08.287773   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:19:08.287773   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:19:08.287773   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:19:08.287773   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:19:08.288322   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:19:08.288322   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:19:08.288322   11080 command_runner.go:130] > ExecStart=
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:19:08.288380   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:19:08.288380   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:19:08.288532   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:19:08.288603   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:19:08.288603   11080 command_runner.go:130] > Delegate=yes
	I0709 11:19:08.288603   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:19:08.288644   11080 command_runner.go:130] > KillMode=process
	I0709 11:19:08.288644   11080 command_runner.go:130] > [Install]
	I0709 11:19:08.288644   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:19:08.299913   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.334941   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:19:08.378216   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.411780   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.445847   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:19:08.504747   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.527698   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:08.557879   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:19:08.569949   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:19:08.575730   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:19:08.587321   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:19:08.604542   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:19:08.652744   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:19:08.860138   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:19:09.036606   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:19:09.036846   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:19:09.086669   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:09.274594   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:11.819580   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5449771s)
	I0709 11:19:11.830623   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 11:19:11.865432   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:11.899527   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 11:19:12.080125   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 11:19:12.263695   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.465673   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 11:19:12.506610   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:12.540854   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.740781   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 11:19:12.845180   11080 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 11:19:12.856179   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0709 11:19:12.864333   11080 command_runner.go:130] > Device: 0,22	Inode: 881         Links: 1
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864333   11080 command_runner.go:130] > Modify: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] > Change: 2024-07-09 18:19:12.777376059 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:12.865396   11080 start.go:562] Will wait 60s for crictl version
	I0709 11:19:12.878013   11080 ssh_runner.go:195] Run: which crictl
	I0709 11:19:12.883453   11080 command_runner.go:130] > /usr/bin/crictl
	I0709 11:19:12.896196   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 11:19:12.945750   11080 command_runner.go:130] > Version:  0.1.0
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeName:  docker
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeApiVersion:  v1
	I0709 11:19:12.946914   11080 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 11:19:12.955749   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:12.986144   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:12.997084   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:13.033222   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:13.039328   11080 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 11:19:13.039536   11080 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: 172.18.192.1/20
	I0709 11:19:13.058315   11080 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 11:19:13.064313   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:13.085011   11080 kubeadm.go:877] updating cluster {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 11:19:13.085193   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:19:13.094647   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:13.119600   11080 docker.go:685] Got preloaded images: 
	I0709 11:19:13.119753   11080 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 11:19:13.132471   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:13.150071   11080 command_runner.go:139] > {"Repositories":{}}
	I0709 11:19:13.160388   11080 ssh_runner.go:195] Run: which lz4
	I0709 11:19:13.168652   11080 command_runner.go:130] > /usr/bin/lz4
	I0709 11:19:13.168652   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0709 11:19:13.180500   11080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0709 11:19:13.186301   11080 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
	I0709 11:19:14.857940   11080 docker.go:649] duration metric: took 1.6892825s to copy over tarball
	I0709 11:19:14.870175   11080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 11:19:23.389025   11080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5188212s)
	I0709 11:19:23.389025   11080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0709 11:19:23.458573   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:23.485866   11080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f389
2682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0709 11:19:23.486188   11080 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 11:19:23.533118   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:23.744757   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:27.380382   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6356119s)
	I0709 11:19:27.389977   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0709 11:19:27.415657   11080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:27.415657   11080 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 11:19:27.415657   11080 cache_images.go:84] Images are preloaded, skipping loading
	I0709 11:19:27.415657   11080 kubeadm.go:928] updating node { 172.18.206.134 8443 v1.30.2 docker true true} ...
	I0709 11:19:27.415657   11080 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-849000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.206.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 11:19:27.423616   11080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 11:19:27.458657   11080 command_runner.go:130] > cgroupfs
	I0709 11:19:27.459385   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:27.459385   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:27.459452   11080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 11:19:27.459452   11080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.206.134 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-849000 NodeName:multinode-849000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.206.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.206.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 11:19:27.459589   11080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.206.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-849000"
	  kubeletExtraArgs:
	    node-ip: 172.18.206.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.206.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0709 11:19:27.472965   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubeadm
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubectl
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubelet
	I0709 11:19:27.499841   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 11:19:27.511476   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 11:19:27.527506   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0709 11:19:27.555887   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 11:19:27.582917   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0709 11:19:27.625088   11080 ssh_runner.go:195] Run: grep 172.18.206.134	control-plane.minikube.internal$ /etc/hosts
	I0709 11:19:27.629979   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.206.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:27.662105   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:27.863890   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:27.891871   11080 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000 for IP: 172.18.206.134
	I0709 11:19:27.891871   11080 certs.go:194] generating shared ca certs ...
	I0709 11:19:27.891974   11080 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 11:19:27.893231   11080 certs.go:256] generating profile certs ...
	I0709 11:19:27.894104   11080 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key
	I0709 11:19:27.894284   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt with IP's: []
	I0709 11:19:28.075685   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt ...
	I0709 11:19:28.075685   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt: {Name:mk25257931a758267f442465386bb9bdebfd15e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.077683   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key ...
	I0709 11:19:28.077683   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key: {Name:mk28ea0dfb093b7e1eceacf2d9e8a6ee777dbd4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.078679   11080 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab
	I0709 11:19:28.078679   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.206.134]
	I0709 11:19:28.282674   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab ...
	I0709 11:19:28.282674   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab: {Name:mk6d3927cc1582195a75050ba0c963c9f3cc6b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.284187   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab ...
	I0709 11:19:28.284187   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab: {Name:mk7c2c31b56e9fbc5ac0d0a2d8ec4a706b474e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.285485   11080 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt
	I0709 11:19:28.296251   11080 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key
	I0709 11:19:28.297243   11080 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key
	I0709 11:19:28.297243   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt with IP's: []
	I0709 11:19:28.588714   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt ...
	I0709 11:19:28.588714   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt: {Name:mk558fea8586bf42355b37f550a2aab396534e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590476   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key ...
	I0709 11:19:28.590476   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key: {Name:mk91292cc98d71191163856df723afdf525149d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 11:19:28.591953   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 11:19:28.592200   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 11:19:28.592414   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 11:19:28.592581   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 11:19:28.592751   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 11:19:28.601940   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 11:19:28.602968   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 11:19:28.602968   11080 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 11:19:28.603997   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 11:19:28.604332   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 11:19:28.604696   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 11:19:28.605757   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 11:19:28.606105   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 11:19:28.606281   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:28.607895   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 11:19:28.657063   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 11:19:28.708475   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 11:19:28.753169   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 11:19:28.799111   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0709 11:19:28.843096   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 11:19:28.892474   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 11:19:28.936778   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 11:19:28.983720   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 11:19:29.032197   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 11:19:29.078840   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 11:19:29.121438   11080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 11:19:29.166376   11080 ssh_runner.go:195] Run: openssl version
	I0709 11:19:29.174606   11080 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0709 11:19:29.186263   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 11:19:29.214563   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221452   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221529   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.233587   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.241034   11080 command_runner.go:130] > 51391683
	I0709 11:19:29.253531   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 11:19:29.287599   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 11:19:29.319642   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.340563   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.351251   11080 command_runner.go:130] > 3ec20f2e
	I0709 11:19:29.363289   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 11:19:29.394996   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 11:19:29.430863   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439488   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439598   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.451335   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.461060   11080 command_runner.go:130] > b5213941
	I0709 11:19:29.472325   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 11:19:29.502349   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 11:19:29.508349   11080 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.508349   11080 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.509336   11080 kubeadm.go:391] StartCluster: {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:19:29.517326   11080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 11:19:29.552571   11080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0709 11:19:29.583129   11080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 11:19:29.614110   11080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0709 11:19:29.630668   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631001   11080 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631083   11080 kubeadm.go:156] found existing configuration files:
	
	I0709 11:19:29.643858   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 11:19:29.660913   11080 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.660913   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.672874   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 11:19:29.701166   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 11:19:29.719398   11080 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.719398   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.732866   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 11:19:29.764341   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.780362   11080 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.781070   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.793378   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.822887   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 11:19:29.839358   11080 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.839848   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.851450   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0709 11:19:29.868927   11080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 11:19:30.273184   11080 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:30.273184   11080 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:43.382099   11080 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [preflight] Running pre-flight checks
	I0709 11:19:43.382302   11080 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 11:19:43.382490   11080 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382562   11080 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.382843   11080 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.385956   11080 out.go:204]   - Generating certificates and keys ...
	I0709 11:19:43.386701   11080 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0709 11:19:43.386720   11080 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 11:19:43.386939   11080 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386963   11080 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.387517   11080 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387517   11080 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387702   11080 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387746   11080 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387967   11080 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.387967   11080 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.388299   11080 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388370   11080 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388585   11080 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388585   11080 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.392839   11080 out.go:204]   - Booting up control plane ...
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.395906   11080 kubeadm.go:309] [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.396929   11080 kubeadm.go:309] [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 command_runner.go:130] > [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 kubeadm.go:309] [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.396929   11080 command_runner.go:130] > [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.399982   11080 out.go:204]   - Configuring RBAC rules ...
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.401848   11080 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.401848   11080 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.405851   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:43.405851   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:43.408882   11080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0709 11:19:43.427890   11080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0709 11:19:43.436838   11080 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: 2024-07-09 18:17:47.269542400 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Modify: 2024-07-08 15:41:40.000000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Change: 2024-07-09 11:17:38.873000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:43.437660   11080 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0709 11:19:43.437724   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0709 11:19:43.486974   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0709 11:19:44.013734   11080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.028712   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.056718   11080 command_runner.go:130] > serviceaccount/kindnet created
	I0709 11:19:44.082804   11080 command_runner.go:130] > daemonset.apps/kindnet created
	I0709 11:19:44.086715   11080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-849000 minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=multinode-849000 minikube.k8s.io/primary=true
	I0709 11:19:44.115923   11080 command_runner.go:130] > -16
	I0709 11:19:44.121702   11080 ops.go:34] apiserver oom_adj: -16
	I0709 11:19:44.326882   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0709 11:19:44.332192   11080 command_runner.go:130] > node/multinode-849000 labeled
	I0709 11:19:44.342094   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.456107   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:44.849260   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.954493   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.356403   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.456462   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.855390   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.956473   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.355707   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.465842   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.857102   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.969191   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.359571   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.471625   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.845990   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.968255   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.348435   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.444253   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.849560   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.962518   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.355988   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.464938   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.857549   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.960971   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.358892   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.517544   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.859431   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.965459   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.346160   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.448688   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.850874   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.960813   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.349922   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.460568   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.858017   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.978603   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.347266   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.460858   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.852199   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.970042   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.358007   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.467115   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.847966   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.971538   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.352008   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.457997   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.855006   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.967023   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.356509   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.497561   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.848447   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.958599   11080 command_runner.go:130] > NAME      SECRETS   AGE
	I0709 11:19:56.958599   11080 command_runner.go:130] > default   0         0s
	I0709 11:19:56.958599   11080 kubeadm.go:1107] duration metric: took 12.8717652s to wait for elevateKubeSystemPrivileges
	W0709 11:19:56.958599   11080 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 11:19:56.958599   11080 kubeadm.go:393] duration metric: took 27.4491691s to StartCluster
	I0709 11:19:56.958599   11080 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.958599   11080 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:56.961504   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.963374   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 11:19:56.963460   11080 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:19:56.963460   11080 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0709 11:19:56.963779   11080 addons.go:69] Setting default-storageclass=true in profile "multinode-849000"
	I0709 11:19:56.963724   11080 addons.go:69] Setting storage-provisioner=true in profile "multinode-849000"
	I0709 11:19:56.963837   11080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-849000"
	I0709 11:19:56.963837   11080 addons.go:234] Setting addon storage-provisioner=true in "multinode-849000"
	I0709 11:19:56.963837   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:56.963837   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:19:56.964647   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.965248   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.970232   11080 out.go:177] * Verifying Kubernetes components...
	I0709 11:19:56.985249   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:57.211673   11080 command_runner.go:130] > apiVersion: v1
	I0709 11:19:57.211752   11080 command_runner.go:130] > data:
	I0709 11:19:57.211752   11080 command_runner.go:130] >   Corefile: |
	I0709 11:19:57.211752   11080 command_runner.go:130] >     .:53 {
	I0709 11:19:57.211752   11080 command_runner.go:130] >         errors
	I0709 11:19:57.211752   11080 command_runner.go:130] >         health {
	I0709 11:19:57.211752   11080 command_runner.go:130] >            lameduck 5s
	I0709 11:19:57.211752   11080 command_runner.go:130] >         }
	I0709 11:19:57.211752   11080 command_runner.go:130] >         ready
	I0709 11:19:57.211825   11080 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0709 11:19:57.211825   11080 command_runner.go:130] >            pods insecure
	I0709 11:19:57.211825   11080 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0709 11:19:57.211825   11080 command_runner.go:130] >            ttl 30
	I0709 11:19:57.211825   11080 command_runner.go:130] >         }
	I0709 11:19:57.211825   11080 command_runner.go:130] >         prometheus :9153
	I0709 11:19:57.211825   11080 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0709 11:19:57.211914   11080 command_runner.go:130] >            max_concurrent 1000
	I0709 11:19:57.211914   11080 command_runner.go:130] >         }
	I0709 11:19:57.211914   11080 command_runner.go:130] >         cache 30
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loop
	I0709 11:19:57.211914   11080 command_runner.go:130] >         reload
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loadbalance
	I0709 11:19:57.212061   11080 command_runner.go:130] >     }
	I0709 11:19:57.212061   11080 command_runner.go:130] > kind: ConfigMap
	I0709 11:19:57.212061   11080 command_runner.go:130] > metadata:
	I0709 11:19:57.212127   11080 command_runner.go:130] >   creationTimestamp: "2024-07-09T18:19:42Z"
	I0709 11:19:57.212127   11080 command_runner.go:130] >   name: coredns
	I0709 11:19:57.212127   11080 command_runner.go:130] >   namespace: kube-system
	I0709 11:19:57.212127   11080 command_runner.go:130] >   resourceVersion: "259"
	I0709 11:19:57.212301   11080 command_runner.go:130] >   uid: 7f6d77d9-aa71-4460-bf8f-36c58243a4c9
	I0709 11:19:57.212540   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 11:19:57.402732   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:57.866428   11080 command_runner.go:130] > configmap/coredns replaced
	I0709 11:19:57.866428   11080 start.go:946] {"host.minikube.internal": 172.18.192.1} host record injected into CoreDNS's ConfigMap
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.869413   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.870414   11080 cert_rotation.go:137] Starting client certificate rotation controller
	I0709 11:19:57.870414   11080 node_ready.go:35] waiting up to 6m0s for node "multinode-849000" to be "Ready" ...
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.885872   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.885872   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Audit-Id: 6bb3d639-9069-4a29-8363-06f8a9831c96
	I0709 11:19:57.886681   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.886681   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:57.887054   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Audit-Id: f8472087-a57e-416c-8eb7-93f828e86e4a
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.887125   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.887908   11080 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.888641   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.888641   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:19:57.888641   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.922291   11080 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0709 11:19:57.922618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Audit-Id: 71677033-c49e-4d37-8393-48341086209c
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.922733   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"391","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.384286   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:19:58.384390   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384390   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 0be5af66-01cb-451f-b03f-f7b17cb342f0
	I0709 11:19:58.384457   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 73b21b85-deb0-469b-929c-809b7004c7a7
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"401","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:58.384457   11080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-849000" context rescaled to 1 replicas
	I0709 11:19:58.870813   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.871025   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.871025   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.871025   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.873618   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:19:58.873618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Audit-Id: ad90069a-940e-4cdb-af81-263d232584a4
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.874322   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.874523   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.317106   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:59.317937   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:59.319000   11080 addons.go:234] Setting addon default-storageclass=true in "multinode-849000"
	I0709 11:19:59.319148   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:59.320086   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.326790   11080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:59.329802   11080 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:19:59.329802   11080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 11:19:59.329802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.380372   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.380372   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.380485   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.380485   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.383785   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:19:59.384697   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Audit-Id: 2d911086-1ff9-4073-8947-dda5637edc43
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.385157   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.876671   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.876962   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.876962   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.876962   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.882163   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:59.882430   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Audit-Id: ad80d923-4aa0-4499-baf3-ad4ec184183d
	I0709 11:19:59.882575   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.883719   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.884541   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:00.380571   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.380571   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.380571   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.380571   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.383966   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:00.384064   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Audit-Id: 4a57b8ec-36c2-4d90-9953-8040b268ad72
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.384193   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.384193   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.384227   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.384339   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:00.874487   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.874487   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.874577   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.874577   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.878085   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:00.878446   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Audit-Id: 7a79b48d-490c-45b9-8151-9d41d845548a
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.878824   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.384736   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.384736   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.384736   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.384736   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.389692   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:01.389768   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.389768   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.389768   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.389862   11080 round_trippers.go:580]     Audit-Id: 1717079c-a1a4-4056-ab5c-ebb223423669
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.389950   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.391360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.648493   11080 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:01.648493   11080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:20:01.693665   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.693737   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.693813   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:01.876763   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.876763   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.876763   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.876763   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.879377   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:01.879377   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Audit-Id: 0ed34bf6-0054-408f-9605-05f03b8f80e6
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.880494   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.384156   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.384242   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.384242   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.384242   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.387596   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:02.388425   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.388519   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.388569   11080 round_trippers.go:580]     Audit-Id: 259b4cd6-103a-46f6-84e4-4843fc15af0a
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.389015   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.389720   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:02.877416   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.877512   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.877583   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.877583   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.880264   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:02.880264   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Audit-Id: 5562798d-5a0c-40f4-971f-b148e1abc842
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.881513   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.385289   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.385402   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.385505   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.385568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.388996   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.389181   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Audit-Id: 4ecfd387-5cb9-439c-becc-8c20cdb41af7
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.389360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.879716   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.879972   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.879972   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.879972   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.883598   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.883598   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Audit-Id: ec1efeda-bf31-45f7-a76f-11d053440253
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.884488   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.951175   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:03.951212   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:03.951320   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:04.384770   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.384770   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.384770   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.384770   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.390877   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:04.390877   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Audit-Id: 2dfefc86-a830-4942-9bba-6769c2bc2c15
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.391263   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:04.391723   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:04.417029   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:04.417846   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:04.417999   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:04.559903   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:20:04.876248   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.876326   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.876326   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.876326   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.879898   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:04.879898   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Audit-Id: 1a6b0670-7193-473e-b8b3-1e5ed801e6c2
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.880302   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.131215   11080 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0709 11:20:05.131215   11080 command_runner.go:130] > pod/storage-provisioner created
	I0709 11:20:05.382732   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.382846   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.382846   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.382940   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.385465   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:05.385465   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Audit-Id: a9b472dd-22b2-460d-9517-6e634e4a101a
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.386469   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.875363   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.875363   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.875363   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.875363   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.879073   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:05.879530   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Audit-Id: 27ad306f-2225-40f7-8dc1-fa87ab3246f1
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.879530   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.879646   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.879646   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.880110   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.381697   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.381697   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.381697   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.381697   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.385207   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.385655   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Audit-Id: 696fd9a0-d92d-43a9-8bb1-bfc5d15a688d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.385720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:06.619934   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:06.761070   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:06.873491   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.873559   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.873559   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.873615   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.876478   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.876544   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Audit-Id: efcee314-8dd6-4c48-a1a6-4bf059942d04
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.876612   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.876721   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.877563   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:06.908144   11080 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0709 11:20:06.908847   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses
	I0709 11:20:06.908910   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.908910   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.908910   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.912483   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.912686   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Length: 1273
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Audit-Id: 739ee856-002a-4545-9544-df6be0efec2a
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.912921   11080 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0709 11:20:06.913516   11080 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.913596   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0709 11:20:06.913596   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:20:06.913704   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.916586   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.916586   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Audit-Id: a5ae0cbf-9df0-489a-8da4-2e8f3aa910ad
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Length: 1220
	I0709 11:20:06.917609   11080 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.921571   11080 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0709 11:20:06.923563   11080 addons.go:510] duration metric: took 9.9600694s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0709 11:20:07.375568   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.375568   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.375568   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.375568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.378569   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:07.379620   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Audit-Id: bd77f714-dc63-4d2c-bf78-52162a6b64d7
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.380117   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:07.875799   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.875861   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.875861   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.875861   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.880450   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:07.880704   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Audit-Id: 74d6bf60-f1ad-4db1-861f-6ea7ba47b092
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.881227   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:08.380911   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.381007   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.381007   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.381059   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.384650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.384650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Audit-Id: 46699637-e1f2-4ffe-9a5a-606601b7ce76
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.385170   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.385430   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.385689   11080 node_ready.go:49] node "multinode-849000" has status "Ready":"True"
	I0709 11:20:08.385689   11080 node_ready.go:38] duration metric: took 10.5152391s for node "multinode-849000" to be "Ready" ...
	I0709 11:20:08.385689   11080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:08.385689   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:08.385689   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.385689   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.385689   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.389650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.389650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Audit-Id: c7a373c1-e4d1-49a7-b63d-f1f5fe5cbdfe
	I0709 11:20:08.391677   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0709 11:20:08.396680   11080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:08.396680   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.396680   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.396680   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.397654   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.401662   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:08.401662   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Audit-Id: f0c73321-6fb5-4d40-a2ca-139f50a7329a
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.402451   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.403030   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.403030   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.403030   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.403030   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.409674   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:08.409674   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.409674   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Audit-Id: f9f6bf0c-50a8-416b-b487-7a0381a93ada
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.411023   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.904464   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.904538   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.904538   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.904538   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.924115   11080 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0709 11:20:08.924115   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.924115   11080 round_trippers.go:580]     Audit-Id: 5c7a83f8-f6fb-40c3-af41-44c2d80fb1eb
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.924509   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.925643   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.925643   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.925643   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.925643   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.942620   11080 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0709 11:20:08.943087   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Audit-Id: 1a00f334-2356-4158-b461-0e0c6821e0b6
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.945720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.412235   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.412389   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.412389   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.412389   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.417018   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.417018   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Audit-Id: 1bacafec-faf2-4175-9ce5-e5206b1140e1
	I0709 11:20:09.417950   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:09.418720   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.418777   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.418777   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.418777   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.421159   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.421159   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Audit-Id: 2bf8156c-3153-4e3e-b8c5-b1b8a2e4e26e
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.423016   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.901337   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.901337   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.901337   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.901337   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.953926   11080 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0709 11:20:09.953926   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Audit-Id: 1aada5b5-53a1-4882-b982-815daf34a5c5
	I0709 11:20:09.955836   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0709 11:20:09.956635   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.956732   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.956732   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.956732   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.959094   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.959094   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Audit-Id: ae59e9a3-f8ac-437b-9c75-8931309c73ad
	I0709 11:20:09.960120   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.960120   11080 pod_ready.go:92] pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.960661   11080 pod_ready.go:81] duration metric: took 1.5639759s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-849000
	I0709 11:20:09.960661   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.960828   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.960828   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.969075   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.969075   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Audit-Id: a17b78fa-415e-466e-8ae8-a1653319ab27
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.969743   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-849000","namespace":"kube-system","uid":"d9414b5f-b783-46b5-bd41-e07fbd338491","resourceVersion":"303","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.206.134:2379","kubernetes.io/config.hash":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.mirror":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.seen":"2024-07-09T18:19:42.812164051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0709 11:20:09.969743   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.970269   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.970321   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.970321   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.979269   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.979269   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Audit-Id: cfddc806-0d43-46bb-bd86-3712a4ab9215
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.979994   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.980431   11080 pod_ready.go:92] pod "etcd-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.980497   11080 pod_ready.go:81] duration metric: took 19.7697ms for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980497   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980690   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-849000
	I0709 11:20:09.980722   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.980753   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.980753   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.984639   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:09.984639   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Audit-Id: 4f8bf9fa-3246-46ce-b3d4-8ea91623128e
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.985248   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-849000","namespace":"kube-system","uid":"185dfcae-7f97-43a4-8ad7-9c2e18ad93f4","resourceVersion":"300","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.206.134:8443","kubernetes.io/config.hash":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.mirror":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0709 11:20:09.986253   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.986253   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.986320   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.986320   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.990658   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.990658   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Audit-Id: fc9d97ed-a036-474e-af5f-aba9fc7ea966
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.991081   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.991515   11080 pod_ready.go:92] pod "kube-apiserver-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.991547   11080 pod_ready.go:81] duration metric: took 11.0006ms for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991547   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991623   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-849000
	I0709 11:20:09.991803   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.991803   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.991803   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.002697   11080 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 11:20:10.002697   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.002697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.002697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Audit-Id: 5618d530-048d-4e22-a41f-dbc85f57723c
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.003187   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.003187   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.003445   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-849000","namespace":"kube-system","uid":"84786301-1bd7-4d77-900b-1130c36259bc","resourceVersion":"316","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.mirror":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165951Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0709 11:20:10.004195   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.004275   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.004275   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.004275   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.011235   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:10.011235   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Audit-Id: b83b8a86-c88b-4eda-adbc-8a4b41174f1d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.011896   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.012314   11080 pod_ready.go:92] pod "kube-controller-manager-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.012440   11080 pod_ready.go:81] duration metric: took 20.8924ms for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012440   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012550   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qv64t
	I0709 11:20:10.012621   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.012662   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.012662   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.016102   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.016102   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Audit-Id: 9328b861-5000-4723-bef4-66bdf082cdc5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.016102   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qv64t","generateName":"kube-proxy-","namespace":"kube-system","uid":"64fd2bca-c117-405b-98c4-db980781839b","resourceVersion":"407","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"20beb658-ecf0-4085-ad20-237b0700e9f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20beb658-ecf0-4085-ad20-237b0700e9f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0709 11:20:10.017415   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.017554   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.017554   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.017554   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.021755   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.021755   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Audit-Id: 7b57217c-1b40-42ea-bd05-ba32c6c09379
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.022911   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.023043   11080 pod_ready.go:92] pod "kube-proxy-qv64t" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.023043   11080 pod_ready.go:81] duration metric: took 10.6037ms for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
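The `pod_ready.go` lines above poll each control-plane pod via the API server and report it "Ready" once its `Ready` condition is `True`. A minimal sketch of that condition check, using hypothetical stand-in types rather than minikube's or client-go's actual structs:

```go
package main

import "fmt"

// PodCondition and PodStatus mirror the "conditions" shape visible in the
// pod Response Body lines above; they are simplified stand-ins, not the
// real Kubernetes API types.
type PodCondition struct {
	Type   string
	Status string
}

type PodStatus struct {
	Phase      string
	Conditions []PodCondition
}

// podReady reports whether the pod's "Ready" condition is "True" --
// the predicate behind log lines like `pod ... has status "Ready":"True"`.
func podReady(s PodStatus) bool {
	for _, c := range s.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	s := PodStatus{
		Phase:      "Running",
		Conditions: []PodCondition{{Type: "Ready", Status: "True"}},
	}
	fmt.Println(podReady(s)) // prints "true"
}
```

The waiter repeats this check against fresh GETs (hence the paired pod/node requests above) until the condition holds or the 6m0s timeout expires.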
	I0709 11:20:10.023043   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.182509   11080 request.go:629] Waited for 159.4656ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182778   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182865   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.182865   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.182897   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.186242   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.186242   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Audit-Id: 821c7888-15a2-4ad9-a6ba-adc53ab8a4f6
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.186554   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.186784   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-849000","namespace":"kube-system","uid":"03dff506-a8f6-41bd-baac-1ef9ad6892e3","resourceVersion":"323","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.mirror":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.seen":"2024-07-09T18:19:42.812159751Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0709 11:20:10.385659   11080 request.go:629] Waited for 198.2784ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.385659   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.385659   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.389558   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.389771   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Audit-Id: 9cc904cb-e823-4a93-85c2-226f98e81fdf
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.390169   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.390760   11080 pod_ready.go:92] pod "kube-scheduler-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.390865   11080 pod_ready.go:81] duration metric: took 367.8204ms for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.390865   11080 pod_ready.go:38] duration metric: took 2.0051694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:10.390944   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0709 11:20:10.403679   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 11:20:10.435279   11080 command_runner.go:130] > 2115
	I0709 11:20:10.436278   11080 api_server.go:72] duration metric: took 13.4725942s to wait for apiserver process to appear ...
	I0709 11:20:10.436474   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0709 11:20:10.436474   11080 api_server.go:253] Checking apiserver healthz at https://172.18.206.134:8443/healthz ...
	I0709 11:20:10.445806   11080 api_server.go:279] https://172.18.206.134:8443/healthz returned 200:
	ok
	I0709 11:20:10.445926   11080 round_trippers.go:463] GET https://172.18.206.134:8443/version
	I0709 11:20:10.445926   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.445926   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.445926   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.448281   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:10.448281   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Audit-Id: 7be21a54-db6a-4318-a5ec-f0cce4ef44ab
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.448527   11080 round_trippers.go:580]     Content-Length: 263
	I0709 11:20:10.448527   11080 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0709 11:20:10.448527   11080 api_server.go:141] control plane version: v1.30.2
	I0709 11:20:10.448527   11080 api_server.go:131] duration metric: took 12.0534ms to wait for apiserver health ...
	I0709 11:20:10.448527   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 11:20:10.589225   11080 request.go:629] Waited for 140.697ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.589493   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.589493   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.594092   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.594092   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Audit-Id: 2b8208e7-66c3-407d-a513-81f6367a1a50
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.594092   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.594453   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.594453   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.596104   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.598949   11080 system_pods.go:59] 8 kube-system pods found
	I0709 11:20:10.599087   11080 system_pods.go:61] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.599087   11080 system_pods.go:74] duration metric: took 150.5589ms to wait for pod list to return data ...
	I0709 11:20:10.599087   11080 default_sa.go:34] waiting for default service account to be created ...
	I0709 11:20:10.792113   11080 request.go:629] Waited for 192.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792224   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792412   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.792412   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.792412   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.796230   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.796230   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.796230   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Content-Length: 261
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Audit-Id: bc150d93-fb7c-4582-beac-a89c1e26ce41
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.796858   11080 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1dc179c9-669f-4ab7-8a39-5d6fc6670d2d","resourceVersion":"341","creationTimestamp":"2024-07-09T18:19:56Z"}}]}
	I0709 11:20:10.797248   11080 default_sa.go:45] found service account: "default"
	I0709 11:20:10.797329   11080 default_sa.go:55] duration metric: took 198.009ms for default service account to be created ...
	I0709 11:20:10.797329   11080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 11:20:10.981424   11080 request.go:629] Waited for 183.8495ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981505   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981752   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.981752   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.981752   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.987139   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:10.987139   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.987139   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Audit-Id: dc7e70c7-c26f-47bd-af7e-e59f9f0e6a12
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.987854   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.990198   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.994984   11080 system_pods.go:86] 8 kube-system pods found
	I0709 11:20:10.994984   11080 system_pods.go:89] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.995749   11080 system_pods.go:126] duration metric: took 198.4185ms to wait for k8s-apps to be running ...
	I0709 11:20:10.995749   11080 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 11:20:11.006411   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:20:11.032299   11080 system_svc.go:56] duration metric: took 36.2519ms WaitForService to wait for kubelet
	I0709 11:20:11.032384   11080 kubeadm.go:576] duration metric: took 14.0686983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:20:11.032449   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0709 11:20:11.185036   11080 request.go:629] Waited for 152.3704ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:11.185036   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:11.185036   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:11.188676   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:11.188676   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:11 GMT
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Audit-Id: de445958-d4f3-421b-bce6-7208e043ef68
	I0709 11:20:11.189854   11080 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0709 11:20:11.190610   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 11:20:11.190610   11080 node_conditions.go:123] node cpu capacity is 2
	I0709 11:20:11.190610   11080 node_conditions.go:105] duration metric: took 158.1605ms to run NodePressure ...
	I0709 11:20:11.190610   11080 start.go:240] waiting for startup goroutines ...
	I0709 11:20:11.190610   11080 start.go:245] waiting for cluster config update ...
	I0709 11:20:11.190610   11080 start.go:254] writing updated cluster config ...
	I0709 11:20:11.194395   11080 out.go:177] 
	I0709 11:20:11.197726   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.210868   11080 out.go:177] * Starting "multinode-849000-m02" worker node in "multinode-849000" cluster
	I0709 11:20:11.213536   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:20:11.214479   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:20:11.214815   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:20:11.215058   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:20:11.215282   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.219596   11080 start.go:360] acquireMachinesLock for multinode-849000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:20:11.219782   11080 start.go:364] duration metric: took 159µs to acquireMachinesLock for "multinode-849000-m02"
	I0709 11:20:11.219811   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0709 11:20:11.219811   11080 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0709 11:20:11.223353   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:20:11.223353   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:20:11.223353   11080 client.go:168] LocalClient.Create starting
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224657   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:20:13.151358   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:20:13.151782   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:13.151847   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:20:14.883405   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:20:14.883642   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:14.883703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:20.080459   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:20:20.573750   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: Creating VM...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:23.656383   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:23.657490   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:23.657490   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:20:23.657579   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:25.447625   11080 main.go:141] libmachine: Creating VHD
	I0709 11:20:25.447625   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5E53C6D0-5109-4D35-B1EC-1393270CA44B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:20:29.284763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:20:32.544147   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:32.544825   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:32.544942   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -SizeBytes 20000MB
	I0709 11:20:35.179825   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-849000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000-m02 -DynamicMemoryEnabled $false
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000-m02 -Count 2
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:43.474205   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\boot2docker.iso'
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:46.097188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd'
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: Starting VM...
	I0709 11:20:49.141353   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000-m02
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:52.444588   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:20:52.444802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:54.848352   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:57.488165   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:57.488298   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:58.493459   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:00.761195   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:03.353161   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:03.353743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:04.368700   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:06.644937   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:10.193913   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:16.096106   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:18.442305   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stderr =====>] : 
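The "Waiting for host to start..." phase above alternates between querying `( Get-VM ... ).state` and the adapter's `ipaddresses[0]`, sleeping about a second between rounds until an address appears (here, after four polls). A minimal sketch of that retry loop, with a stub `get_ip` standing in for the PowerShell query (the stub and its threshold are hypothetical):

```shell
# Poll until the (stubbed) query returns an address, as the driver does
# while the VM boots. get_ip stands in for the Hyper-V ipaddresses query.
attempt=0
get_ip() {
    # Hypothetical stub: no address for the first three polls, then one.
    if [ "$attempt" -ge 3 ]; then echo "172.18.205.211"; fi
}
ip=""
while [ -z "$ip" ]; do
    ip=$(get_ip)
    if [ -z "$ip" ]; then
        attempt=$((attempt + 1))
        sleep 0   # the real loop waits ~1s between polls
    fi
done
echo "got ip: $ip"
```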
	I0709 11:21:23.279312   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:21:23.279415   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:25.559526   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:25.560574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:25.560679   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:28.232227   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:28.233232   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:28.238921   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:28.250822   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:28.250822   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:21:28.388458   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:21:28.388571   11080 buildroot.go:166] provisioning hostname "multinode-849000-m02"
	I0709 11:21:28.388571   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:30.618011   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:33.212355   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:33.212671   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:33.219551   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:33.220082   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:33.220082   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000-m02 && echo "multinode-849000-m02" | sudo tee /etc/hostname
	I0709 11:21:33.391210   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000-m02
	
	I0709 11:21:33.391343   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:35.578543   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:38.191886   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:38.192615   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:38.192615   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:21:38.341565   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
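The SSH command above edits /etc/hosts idempotently: if the new hostname is not already present, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. The same logic, exercised against a temporary copy instead of the real /etc/hosts (no sudo; assumes GNU sed for `-i`):

```shell
# Idempotent 127.0.1.1 hostname entry, as in the provisioning script above,
# run against a scratch file with placeholder contents.
NEW_NAME=multinode-849000-m02
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"
if ! grep -q "[[:space:]]${NEW_NAME}$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # An entry exists: rewrite it rather than appending a duplicate.
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW_NAME}/" "$HOSTS"
    else
        echo "127.0.1.1 ${NEW_NAME}" >> "$HOSTS"
    fi
fi
cat "$HOSTS"
```

Running it a second time is a no-op, since the first `grep` then matches.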
	I0709 11:21:38.341639   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:21:38.341639   11080 buildroot.go:174] setting up certificates
	I0709 11:21:38.341639   11080 provision.go:84] configureAuth start
	I0709 11:21:38.341639   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:43.076717   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:45.280910   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:45.281082   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:45.281156   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:47.878898   11080 provision.go:143] copyHostCerts
	I0709 11:21:47.879605   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:21:47.880180   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:21:47.880180   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:21:47.880971   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:21:47.882540   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:21:47.883125   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:21:47.883125   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:21:47.883679   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:21:47.885058   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:21:47.885436   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:21:47.885557   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:21:47.886134   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:21:47.887498   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000-m02 san=[127.0.0.1 172.18.205.211 localhost minikube multinode-849000-m02]
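minikube generates server.pem in Go; an equivalent one-shot sketch with openssl (1.1.1 or newer for `-addext`) produces a self-signed cert with the same org and SAN set as the log line above:

```shell
# Self-signed server cert with the SANs listed in the provision.go line
# (loopback, the VM's address, localhost, minikube, and the node name).
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$DIR/server-key.pem" -out "$DIR/server.pem" \
  -subj "/O=jenkins.multinode-849000-m02" \
  -addext "subjectAltName=IP:127.0.0.1,IP:172.18.205.211,DNS:localhost,DNS:minikube,DNS:multinode-849000-m02" \
  2>/dev/null
openssl x509 -in "$DIR/server.pem" -noout -ext subjectAltName
```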
	I0709 11:21:48.001674   11080 provision.go:177] copyRemoteCerts
	I0709 11:21:48.013068   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:21:48.014084   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:50.250018   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:50.250215   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:50.250314   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:52.836979   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:52.837914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:52.838808   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:21:52.940691   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9274594s)
	I0709 11:21:52.940691   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:21:52.941438   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:21:52.990054   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:21:52.990054   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:21:53.038708   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:21:53.039254   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0709 11:21:53.086100   11080 provision.go:87] duration metric: took 14.7444116s to configureAuth
	I0709 11:21:53.086158   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:21:53.086860   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:21:53.086990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:55.350257   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:55.351179   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:55.351218   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:57.996542   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:57.997434   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:57.997434   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:21:58.134576   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:21:58.134576   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:21:58.135124   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:21:58.135124   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:00.283090   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:00.284070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:00.284213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:02.866133   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:02.866377   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:02.871379   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:02.872132   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:02.872132   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.206.134"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:22:03.038743   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.206.134
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:22:03.038743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:05.225105   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:07.815935   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:07.816766   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:07.816766   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:22:10.033737   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:22:10.033805   11080 machine.go:97] duration metric: took 46.7543344s to provisionDockerMachine
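The `diff ... || { mv ...; systemctl ... }` command above makes the unit install idempotent: the service is only replaced and restarted when the rendered file actually differs from what is on disk. The same pattern on temp files, with a flag standing in for the systemctl daemon-reload/enable/restart calls (no systemd assumed here, and `cp` used instead of `mv` so both files remain inspectable):

```shell
# Compare-then-swap, as in the docker.service update above.
CUR=$(mktemp); NEW=$(mktemp)
echo "ExecStart=/usr/bin/dockerd --label provider=hyperv" > "$NEW"
# First pass: CUR is empty, so the files differ and NEW gets installed.
if ! diff -u "$CUR" "$NEW" >/dev/null 2>&1; then
    cp "$NEW" "$CUR"    # install the new unit file
    restarted=yes       # stand-in for daemon-reload + restart
else
    restarted=no
fi
echo "restarted=$restarted"
```

A second pass would find the files identical and skip the restart, which is why an unchanged unit never bounces the docker daemon.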
	I0709 11:22:10.033805   11080 client.go:171] duration metric: took 1m58.8100611s to LocalClient.Create
	I0709 11:22:10.033904   11080 start.go:167] duration metric: took 1m58.81016s to libmachine.API.Create "multinode-849000"
	I0709 11:22:10.033904   11080 start.go:293] postStartSetup for "multinode-849000-m02" (driver="hyperv")
	I0709 11:22:10.033904   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:22:10.049483   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:22:10.049483   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:12.196759   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:14.773966   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:14.774211   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:14.774388   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:14.880469   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8308404s)
	I0709 11:22:14.893820   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:22:14.900205   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:22:14.900586   11080 command_runner.go:130] > ID=buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:22:14.900586   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:22:14.900878   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:22:14.900958   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:22:14.901694   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:22:14.902949   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:22:14.903007   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:22:14.914648   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:22:14.931988   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:22:14.976672   11080 start.go:296] duration metric: took 4.9427507s for postStartSetup
	I0709 11:22:14.980296   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:17.149588   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:19.731744   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:22:19.734373   11080 start.go:128] duration metric: took 2m8.5141378s to createHost
	I0709 11:22:19.734498   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:21.884569   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:21.885475   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:21.885570   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:24.462310   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:24.462866   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:24.462866   11080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0709 11:22:24.602515   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549344.609926885
	
	I0709 11:22:24.602629   11080 fix.go:216] guest clock: 1720549344.609926885
	I0709 11:22:24.602629   11080 fix.go:229] Guest: 2024-07-09 11:22:24.609926885 -0700 PDT Remote: 2024-07-09 11:22:19.7344985 -0700 PDT m=+344.108245701 (delta=4.875428385s)
	I0709 11:22:24.602743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:26.788501   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:29.322797   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:29.323325   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:29.323492   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549344
	I0709 11:22:29.467864   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:22:24 UTC 2024
	
	I0709 11:22:29.467922   11080 fix.go:236] clock set: Tue Jul  9 18:22:24 UTC 2024
	 (err=<nil>)
	I0709 11:22:29.467976   11080 start.go:83] releasing machines lock for "multinode-849000-m02", held for 2m18.2477075s
	I0709 11:22:29.468213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:31.622432   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:31.623654   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:31.623715   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:34.183731   11080 out.go:177] * Found network options:
	I0709 11:22:34.186860   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.188920   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.191174   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.194227   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 11:22:34.195301   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.198398   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:22:34.198526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:34.208413   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:22:34.209355   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474885   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:39.120904   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.121123   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.121331   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.150109   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.214930   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0709 11:22:39.216101   11080 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0076706s)
	W0709 11:22:39.216101   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:22:39.228355   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:22:39.361349   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:22:39.361418   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:22:39.361418   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1630028s)
	I0709 11:22:39.361567   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:22:39.361605   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:39.361773   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:39.395534   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:22:39.411076   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:22:39.440578   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:22:39.459507   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:22:39.472271   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:22:39.503478   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.535129   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:22:39.565594   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.596645   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:22:39.626303   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:22:39.657871   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:22:39.687857   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:22:39.718726   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:22:39.737354   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:22:39.750092   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:22:39.780554   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:39.961136   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 11:22:40.003477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:40.015211   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:22:40.037706   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:22:40.037931   11080 command_runner.go:130] > [Unit]
	I0709 11:22:40.037931   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:22:40.037931   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:22:40.037931   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:22:40.037931   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:22:40.037996   11080 command_runner.go:130] > [Service]
	I0709 11:22:40.037996   11080 command_runner.go:130] > Type=notify
	I0709 11:22:40.037996   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:22:40.037996   11080 command_runner.go:130] > Environment=NO_PROXY=172.18.206.134
	I0709 11:22:40.037996   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:22:40.037996   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:22:40.038089   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:22:40.038089   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:22:40.038089   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:22:40.038089   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:22:40.038089   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:22:40.038158   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:22:40.038158   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:22:40.038158   11080 command_runner.go:130] > ExecStart=
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:22:40.038260   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:22:40.038260   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:22:40.038260   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:22:40.038323   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:22:40.038430   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:22:40.038469   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:22:40.038532   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:22:40.038566   11080 command_runner.go:130] > Delegate=yes
	I0709 11:22:40.038566   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:22:40.038566   11080 command_runner.go:130] > KillMode=process
	I0709 11:22:40.038566   11080 command_runner.go:130] > [Install]
	I0709 11:22:40.038609   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:22:40.055979   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.091794   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:22:40.154011   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.190664   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.226820   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:22:40.287595   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.308575   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:40.342070   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:22:40.354449   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:22:40.359803   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:22:40.371212   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:22:40.388323   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:22:40.433437   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:22:40.633922   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:22:40.820826   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:22:40.820826   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:22:40.864181   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:41.057366   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:23:42.172852   11080 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0709 11:23:42.172852   11080 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0709 11:23:42.173160   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1155866s)
	I0709 11:23:42.185419   11080 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.209973   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.210951   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211574   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211639   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0709 11:23:42.221589   11080 out.go:177] 
	W0709 11:23:42.223827   11080 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0709 11:23:42.223827   11080 out.go:239] * 
	W0709 11:23:42.225718   11080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0709 11:23:42.228228   11080 out.go:177] 
	
	
	==> Docker <==
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597835991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597891091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597905791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597983991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597776491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d8c6b21616c767448c4be98bae932ed2b404a3dadcf2b2b4b157e8bcf347ea/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a33ce3348449c0faec48fb58b4574718ba6b78d837824e60579921c71f06d76/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968184436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968452735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968474235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968801835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.141801596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.142933705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.143853812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.144140014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904534514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904809014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904875715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904980715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:18 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/216d18e70c2fb87f116d16247afca62184ce070d4aca7bbce19d833808db917c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 09 18:24:19 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285320124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285707025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285773326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285917526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c7a0fcb9e869e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago      Running             busybox                   0                   216d18e70c2fb       busybox-fc5497c4f-f2j8m
	c150592e658c3       cbb01a7bd410d                                                                                         24 minutes ago      Running             coredns                   0                   2a33ce3348449       coredns-7db6d8ff4d-lzsvc
	37c7b8e14dc9c       6e38f40d628db                                                                                         24 minutes ago      Running             storage-provisioner       0                   06d8c6b21616c       storage-provisioner
	f3de6fb5f7f77       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago      Running             kindnet-cni               0                   668c809456776       kindnet-8ww8c
	02ab9d1727686       53c535741fb44                                                                                         24 minutes ago      Running             kube-proxy                0                   0a60f24294838       kube-proxy-qv64t
	0272c56037c7d       3861cfcd7c04c                                                                                         25 minutes ago      Running             etcd                      0                   2c574be2cc6d3       etcd-multinode-849000
	8661e349d48ab       7820c83aa1394                                                                                         25 minutes ago      Running             kube-scheduler            0                   b9412aa955ab7       kube-scheduler-multinode-849000
	a89ee753e7759       e874818b3caac                                                                                         25 minutes ago      Running             kube-controller-manager   0                   a610e3d24fa06       kube-controller-manager-multinode-849000
	556077ae2825d       56ce0fd9fb532                                                                                         25 minutes ago      Running             kube-apiserver            0                   2035bb8593f0e       kube-apiserver-multinode-849000
	
	
	==> coredns [c150592e658c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = eabdad51eef6fc649fa850c178ba451366b41048c1c621a6be25e706245d9103e597e4866d961c875c300d6a366ff9db50ab3afe55608b789039c53007846ed6
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54651 - 41351 "HINFO IN 6752767091270397564.1917026836058955763. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104932825s
	[INFO] 10.244.0.3:37665 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218301s
	[INFO] 10.244.0.3:33292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.095768808s
	[INFO] 10.244.0.3:51028 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033779908s
	[INFO] 10.244.0.3:52198 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.254317433s
	[INFO] 10.244.0.3:58685 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001442s
	[INFO] 10.244.0.3:50205 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.085049073s
	[INFO] 10.244.0.3:41462 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002117s
	[INFO] 10.244.0.3:46161 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002965s
	[INFO] 10.244.0.3:40010 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.038270523s
	[INFO] 10.244.0.3:50213 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181901s
	[INFO] 10.244.0.3:40333 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208801s
	[INFO] 10.244.0.3:33479 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001618s
	[INFO] 10.244.0.3:44590 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223001s
	[INFO] 10.244.0.3:58378 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001694s
	[INFO] 10.244.0.3:35676 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.0.3:50088 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126901s
	[INFO] 10.244.0.3:60351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000289801s
	[INFO] 10.244.0.3:33623 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000197201s
	[INFO] 10.244.0.3:60126 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001055s
	[INFO] 10.244.0.3:44284 - 5 "PTR IN 1.192.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150901s
	
	
	==> describe nodes <==
	Name:               multinode-849000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-849000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=multinode-849000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 18:19:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-849000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 18:44:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 18:40:05 +0000   Tue, 09 Jul 2024 18:20:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.206.134
	  Hostname:    multinode-849000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 af90c209c8a84d288c2d79663fa33a94
	  System UUID:                69e31ac5-0527-9e4a-81b6-556c6bac7963
	  Boot ID:                    5c1387e9-724e-4a1c-a3cc-dde77e8449e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f2j8m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-lzsvc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 etcd-multinode-849000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-8ww8c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-multinode-849000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-multinode-849000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-qv64t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-multinode-849000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  Starting                 25m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 25m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m                kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m                kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m                kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24m                node-controller  Node multinode-849000 event: Registered Node multinode-849000 in Controller
	  Normal  NodeReady                24m                kubelet          Node multinode-849000 status is now: NodeReady
	
	
	Name:               multinode-849000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-849000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=multinode-849000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_09T11_40_09_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 18:40:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-849000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 18:43:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:43:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:43:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:43:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:43:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.18.196.236
	  Hostname:    multinode-849000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 30665cda6be840e19de2d42101ee89bb
	  System UUID:                ddf7b545-8cfa-674d-b55f-fd48f2f9d4f5
	  Boot ID:                    c8391cc6-6aee-4957-ada5-1a481b0a3745
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hjks    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kindnet-sn4kd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m42s
	  kube-system                 kube-proxy-wdskl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m42s (x2 over 4m42s)  kubelet          Node multinode-849000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s (x2 over 4m42s)  kubelet          Node multinode-849000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s (x2 over 4m42s)  kubelet          Node multinode-849000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m39s                  node-controller  Node multinode-849000-m03 event: Registered Node multinode-849000-m03 in Controller
	  Normal  NodeReady                4m18s                  kubelet          Node multinode-849000-m03 status is now: NodeReady
	  Normal  NodeNotReady             54s                    node-controller  Node multinode-849000-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +7.061894] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul 9 18:18] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.172355] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[Jul 9 18:19] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.106297] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.542997] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.194600] systemd-fstab-generator[1056]: Ignoring "noauto" option for root device
	[  +0.225984] systemd-fstab-generator[1070]: Ignoring "noauto" option for root device
	[  +2.819794] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.174764] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.183052] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.284648] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[ +10.989764] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.110491] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.025456] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.572905] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.100801] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.070675] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.120083] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.551679] systemd-fstab-generator[2475]: Ignoring "noauto" option for root device
	[  +0.193907] kauditd_printk_skb: 12 callbacks suppressed
	[Jul 9 18:20] kauditd_printk_skb: 51 callbacks suppressed
	[Jul 9 18:24] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [0272c56037c7] <==
	{"level":"info","ts":"2024-07-09T18:29:37.900644Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2108544045,"revision":687,"compact-revision":-1}
	{"level":"info","ts":"2024-07-09T18:34:37.903933Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-07-09T18:34:37.912189Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":927,"took":"7.652225ms","hash":1821337612,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-09T18:34:37.912513Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1821337612,"revision":927,"compact-revision":687}
	{"level":"info","ts":"2024-07-09T18:35:57.287138Z","caller":"traceutil/trace.go:171","msg":"trace[1176997031] linearizableReadLoop","detail":"{readStateIndex:1442; appliedIndex:1441; }","duration":"158.59851ms","start":"2024-07-09T18:35:57.12852Z","end":"2024-07-09T18:35:57.287118Z","steps":["trace[1176997031] 'read index received'  (duration: 137.916144ms)","trace[1176997031] 'applied index is now lower than readState.Index'  (duration: 20.680866ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-09T18:35:57.287544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.000512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-4hjks\" ","response":"range_response_count:1 size:2221"}
	{"level":"info","ts":"2024-07-09T18:35:57.287811Z","caller":"traceutil/trace.go:171","msg":"trace[632773735] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-4hjks; range_end:; response_count:1; response_revision:1233; }","duration":"159.270012ms","start":"2024-07-09T18:35:57.128515Z","end":"2024-07-09T18:35:57.287785Z","steps":["trace[632773735] 'agreement among raft nodes before linearized reading'  (duration: 158.812611ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:37:35.826214Z","caller":"traceutil/trace.go:171","msg":"trace[478726099] transaction","detail":"{read_only:false; response_revision:1311; number_of_response:1; }","duration":"158.19521ms","start":"2024-07-09T18:37:35.667982Z","end":"2024-07-09T18:37:35.826177Z","steps":["trace[478726099] 'process raft request'  (duration: 158.074409ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:39:37.921147Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1168}
	{"level":"info","ts":"2024-07-09T18:39:37.929404Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1168,"took":"7.948126ms","hash":3253994334,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-09T18:39:37.929571Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3253994334,"revision":1168,"compact-revision":927}
	{"level":"info","ts":"2024-07-09T18:40:13.451954Z","caller":"traceutil/trace.go:171","msg":"trace[1502299339] transaction","detail":"{read_only:false; response_revision:1471; number_of_response:1; }","duration":"179.100678ms","start":"2024-07-09T18:40:13.272835Z","end":"2024-07-09T18:40:13.451935Z","steps":["trace[1502299339] 'process raft request'  (duration: 178.950978ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T18:40:14.005634Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.253227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-849000-m03\" ","response":"range_response_count:1 size:2848"}
	{"level":"info","ts":"2024-07-09T18:40:14.005805Z","caller":"traceutil/trace.go:171","msg":"trace[2101599561] range","detail":"{range_begin:/registry/minions/multinode-849000-m03; range_end:; response_count:1; response_revision:1472; }","duration":"132.404128ms","start":"2024-07-09T18:40:13.873328Z","end":"2024-07-09T18:40:14.005732Z","steps":["trace[2101599561] 'range keys from in-memory index tree'  (duration: 131.983226ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:40:19.670021Z","caller":"traceutil/trace.go:171","msg":"trace[1040829640] transaction","detail":"{read_only:false; response_revision:1479; number_of_response:1; }","duration":"173.817261ms","start":"2024-07-09T18:40:19.496184Z","end":"2024-07-09T18:40:19.670001Z","steps":["trace[1040829640] 'process raft request'  (duration: 173.61226ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T18:40:21.061754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.020023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-849000-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-07-09T18:40:21.061828Z","caller":"traceutil/trace.go:171","msg":"trace[42653553] range","detail":"{range_begin:/registry/minions/multinode-849000-m03; range_end:; response_count:1; response_revision:1481; }","duration":"193.165323ms","start":"2024-07-09T18:40:20.868649Z","end":"2024-07-09T18:40:21.061814Z","steps":["trace[42653553] 'range keys from in-memory index tree'  (duration: 192.928723ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:43:35.498409Z","caller":"traceutil/trace.go:171","msg":"trace[659964785] transaction","detail":"{read_only:false; response_revision:1679; number_of_response:1; }","duration":"247.171591ms","start":"2024-07-09T18:43:35.251216Z","end":"2024-07-09T18:43:35.498388Z","steps":["trace[659964785] 'process raft request'  (duration: 246.984191ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:43:37.157261Z","caller":"traceutil/trace.go:171","msg":"trace[831135192] transaction","detail":"{read_only:false; response_revision:1680; number_of_response:1; }","duration":"116.848632ms","start":"2024-07-09T18:43:37.040393Z","end":"2024-07-09T18:43:37.157241Z","steps":["trace[831135192] 'process raft request'  (duration: 116.710932ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:43:37.461958Z","caller":"traceutil/trace.go:171","msg":"trace[390708889] linearizableReadLoop","detail":"{readStateIndex:1985; appliedIndex:1984; }","duration":"105.267809ms","start":"2024-07-09T18:43:37.356664Z","end":"2024-07-09T18:43:37.461932Z","steps":["trace[390708889] 'read index received'  (duration: 51.503702ms)","trace[390708889] 'applied index is now lower than readState.Index'  (duration: 53.762307ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-09T18:43:37.462236Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.542211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-09T18:43:37.462318Z","caller":"traceutil/trace.go:171","msg":"trace[1756853946] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1680; }","duration":"105.627011ms","start":"2024-07-09T18:43:37.356635Z","end":"2024-07-09T18:43:37.462262Z","steps":["trace[1756853946] 'agreement among raft nodes before linearized reading'  (duration: 105.37421ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:44:37.954071Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1408}
	{"level":"info","ts":"2024-07-09T18:44:37.962594Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1408,"took":"7.639517ms","hash":1552300792,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1773568,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-09T18:44:37.962695Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1552300792,"revision":1408,"compact-revision":1168}
	
	
	==> kernel <==
	 18:44:50 up 27 min,  0 users,  load average: 0.18, 0.41, 0.37
	Linux multinode-849000 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f3de6fb5f7f7] <==
	I0709 18:43:47.937523       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:43:57.944845       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:43:57.944980       1 main.go:227] handling current node
	I0709 18:43:57.944997       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:43:57.945004       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:44:07.952801       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:44:07.952901       1 main.go:227] handling current node
	I0709 18:44:07.952915       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:44:07.952922       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:44:17.959996       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:44:17.960099       1 main.go:227] handling current node
	I0709 18:44:17.960126       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:44:17.960133       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:44:27.974386       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:44:27.974414       1 main.go:227] handling current node
	I0709 18:44:27.974425       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:44:27.974430       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:44:37.989987       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:44:37.990031       1 main.go:227] handling current node
	I0709 18:44:37.990042       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:44:37.990048       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:44:47.997967       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:44:47.998093       1 main.go:227] handling current node
	I0709 18:44:47.998109       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:44:47.998134       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [556077ae2825] <==
	I0709 18:19:39.638553       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0709 18:19:39.698240       1 shared_informer.go:320] Caches are synced for configmaps
	I0709 18:19:39.700011       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0709 18:19:39.702635       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0709 18:19:39.714433       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0709 18:19:40.505081       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0709 18:19:40.517142       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0709 18:19:40.517305       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0709 18:19:41.636583       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0709 18:19:41.706223       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0709 18:19:41.808149       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0709 18:19:41.821195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.206.134]
	I0709 18:19:41.822637       1 controller.go:615] quota admission added evaluator for: endpoints
	I0709 18:19:41.843642       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0709 18:19:42.609385       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0709 18:19:42.805564       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0709 18:19:42.871569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0709 18:19:42.907682       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0709 18:19:57.333598       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0709 18:19:57.543081       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0709 18:35:55.870544       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53940: use of closed network connection
	E0709 18:35:56.795209       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53945: use of closed network connection
	E0709 18:35:57.698486       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53950: use of closed network connection
	E0709 18:36:33.178526       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53970: use of closed network connection
	E0709 18:36:43.597768       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53972: use of closed network connection
	
	
	==> kube-controller-manager [a89ee753e775] <==
	I0709 18:19:57.815368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.660854ms"
	I0709 18:19:57.815916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.6µs"
	I0709 18:19:58.007755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.828816ms"
	I0709 18:19:58.026709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.106923ms"
	I0709 18:19:58.029403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.1µs"
	I0709 18:20:07.977654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.049991ms"
	I0709 18:20:08.015594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111µs"
	I0709 18:20:09.991729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.353168ms"
	I0709 18:20:10.001112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="868.106µs"
	I0709 18:20:11.554561       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0709 18:24:17.420348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.233775ms"
	I0709 18:24:17.441694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.911551ms"
	I0709 18:24:17.444364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.629006ms"
	I0709 18:24:20.165672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.094324ms"
	I0709 18:24:20.166173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	I0709 18:40:08.595141       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-849000-m03\" does not exist"
	I0709 18:40:08.641712       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-849000-m03" podCIDRs=["10.244.1.0/24"]
	I0709 18:40:11.793433       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-849000-m03"
	I0709 18:40:32.591516       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-849000-m03"
	I0709 18:40:32.616362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="263.401µs"
	I0709 18:40:32.638542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.1µs"
	I0709 18:40:35.404984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.084842ms"
	I0709 18:40:35.405359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.3µs"
	I0709 18:43:56.960196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.713036ms"
	I0709 18:43:56.960330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.3µs"
	
	
	==> kube-proxy [02ab9d172768] <==
	I0709 18:19:58.913720       1 server_linux.go:69] "Using iptables proxy"
	I0709 18:19:58.935439       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.206.134"]
	I0709 18:19:59.002175       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 18:19:59.002345       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 18:19:59.002422       1 server_linux.go:165] "Using iptables Proxier"
	I0709 18:19:59.006984       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 18:19:59.008394       1 server.go:872] "Version info" version="v1.30.2"
	I0709 18:19:59.008567       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 18:19:59.012208       1 config.go:192] "Starting service config controller"
	I0709 18:19:59.012230       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 18:19:59.012257       1 config.go:101] "Starting endpoint slice config controller"
	I0709 18:19:59.012263       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 18:19:59.014777       1 config.go:319] "Starting node config controller"
	I0709 18:19:59.015900       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 18:19:59.113145       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0709 18:19:59.113150       1 shared_informer.go:320] Caches are synced for service config
	I0709 18:19:59.116402       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8661e349d48a] <==
	W0709 18:19:40.760717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0709 18:19:40.760830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0709 18:19:40.849864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0709 18:19:40.850245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0709 18:19:40.865437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.865496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.872200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0709 18:19:40.872364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0709 18:19:40.917325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.917365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.931008       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.931093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.976206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0709 18:19:40.976434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0709 18:19:41.005485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0709 18:19:41.005666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0709 18:19:41.019785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0709 18:19:41.020146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0709 18:19:41.110495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 18:19:41.110614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0709 18:19:41.120707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0709 18:19:41.122629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0709 18:19:41.253897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0709 18:19:41.254338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0709 18:19:43.553553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 09 18:40:42 multinode-849000 kubelet[2293]: E0709 18:40:42.973444    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:40:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:41:42 multinode-849000 kubelet[2293]: E0709 18:41:42.971444    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:41:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:41:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:41:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:41:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:42:42 multinode-849000 kubelet[2293]: E0709 18:42:42.972527    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:43:42 multinode-849000 kubelet[2293]: E0709 18:43:42.974622    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:43:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:43:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:43:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:43:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:44:42 multinode-849000 kubelet[2293]: E0709 18:44:42.980346    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:44:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:44:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:44:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:44:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:44:42.836455   15280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000: (11.7313846s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-849000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopNode (120.34s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (169.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 node start m03 -v=7 --alsologtostderr
E0709 11:45:30.096533   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 node start m03 -v=7 --alsologtostderr: exit status 1 (1m31.5065325s)

                                                
                                                
-- stdout --
	* Starting "multinode-849000-m03" worker node in "multinode-849000" cluster
	* Restarting existing hyperv VM for "multinode-849000-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:45:04.226402   12184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0709 11:45:04.227936   12184 out.go:291] Setting OutFile to fd 876 ...
	I0709 11:45:04.248629   12184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:45:04.248629   12184 out.go:304] Setting ErrFile to fd 1300...
	I0709 11:45:04.248629   12184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:45:04.268701   12184 mustload.go:65] Loading cluster: multinode-849000
	I0709 11:45:04.268701   12184 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:45:04.270473   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:06.434433   12184 main.go:141] libmachine: [stdout =====>] : Off
	
	I0709 11:45:06.434433   12184 main.go:141] libmachine: [stderr =====>] : 
	W0709 11:45:06.434433   12184 host.go:58] "multinode-849000-m03" host status: Stopped
	I0709 11:45:06.438007   12184 out.go:177] * Starting "multinode-849000-m03" worker node in "multinode-849000" cluster
	I0709 11:45:06.440588   12184 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:45:06.440588   12184 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 11:45:06.440588   12184 cache.go:56] Caching tarball of preloaded images
	I0709 11:45:06.441969   12184 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:45:06.441969   12184 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:45:06.442608   12184 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:45:06.445353   12184 start.go:360] acquireMachinesLock for multinode-849000-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:45:06.445609   12184 start.go:364] duration metric: took 85.9µs to acquireMachinesLock for "multinode-849000-m03"
	I0709 11:45:06.445795   12184 start.go:96] Skipping create...Using existing machine configuration
	I0709 11:45:06.445839   12184 fix.go:54] fixHost starting: m03
	I0709 11:45:06.446367   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:08.546450   12184 main.go:141] libmachine: [stdout =====>] : Off
	
	I0709 11:45:08.546535   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:08.546535   12184 fix.go:112] recreateIfNeeded on multinode-849000-m03: state=Stopped err=<nil>
	W0709 11:45:08.546613   12184 fix.go:138] unexpected machine state, will restart: <nil>
	I0709 11:45:08.550528   12184 out.go:177] * Restarting existing hyperv VM for "multinode-849000-m03" ...
	I0709 11:45:08.554216   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000-m03
	I0709 11:45:11.612629   12184 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:45:11.624962   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:11.624962   12184 main.go:141] libmachine: Waiting for host to start...
	I0709 11:45:11.625025   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:13.844659   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:45:13.844734   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:13.844734   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:45:16.323586   12184 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:45:16.335907   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:17.344715   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:19.509150   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:45:19.509150   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:19.509150   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:45:22.026433   12184 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:45:22.030364   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:23.032556   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:25.190461   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:45:25.196826   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:25.196978   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:45:27.677000   12184 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:45:27.677713   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:28.695314   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:30.861567   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:45:30.861567   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:30.861567   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:45:33.355392   12184 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:45:33.355472   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:34.378087   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:36.595043   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:45:36.595262   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:36.595341   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:45:39.077995   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:45:39.077995   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:39.089012   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:41.171975   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:45:41.184288   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:41.184505   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:45:43.672801   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:45:43.676692   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:43.676692   12184 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:45:43.679998   12184 machine.go:94] provisionDockerMachine start ...
	I0709 11:45:43.679998   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:45.756835   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:45:45.760630   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:45.760630   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:45:48.232335   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:45:48.232335   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:48.249129   12184 main.go:141] libmachine: Using SSH client type: native
	I0709 11:45:48.249955   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
	I0709 11:45:48.249955   12184 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:45:48.379265   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:45:48.379373   12184 buildroot.go:166] provisioning hostname "multinode-849000-m03"
	I0709 11:45:48.379373   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:50.448438   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:45:50.460660   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:50.460795   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:45:52.948513   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:45:52.948513   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:52.966748   12184 main.go:141] libmachine: Using SSH client type: native
	I0709 11:45:52.966748   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
	I0709 11:45:52.966748   12184 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000-m03 && echo "multinode-849000-m03" | sudo tee /etc/hostname
	I0709 11:45:53.114883   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000-m03
	
	I0709 11:45:53.114936   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:55.199931   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:45:55.200292   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:55.200292   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:45:57.674117   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:45:57.674117   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:57.692591   12184 main.go:141] libmachine: Using SSH client type: native
	I0709 11:45:57.693152   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
	I0709 11:45:57.693152   12184 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:45:57.824393   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:45:57.824946   12184 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:45:57.825005   12184 buildroot.go:174] setting up certificates
	I0709 11:45:57.825005   12184 provision.go:84] configureAuth start
	I0709 11:45:57.825128   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:45:59.872520   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:45:59.872595   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:45:59.872595   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:46:02.359336   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:46:02.359438   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:02.359510   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:46:04.449511   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:46:04.449511   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:04.449511   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:46:06.974686   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:46:06.974686   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:06.974686   12184 provision.go:143] copyHostCerts
	I0709 11:46:06.976207   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:46:06.976207   12184 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:46:06.976207   12184 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:46:06.977112   12184 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:46:06.978972   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:46:06.978972   12184 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:46:06.978972   12184 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:46:06.979731   12184 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:46:06.980790   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:46:06.981223   12184 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:46:06.981223   12184 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:46:06.981814   12184 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:46:06.983056   12184 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000-m03 san=[127.0.0.1 172.18.192.32 localhost minikube multinode-849000-m03]
	I0709 11:46:07.288459   12184 provision.go:177] copyRemoteCerts
	I0709 11:46:07.298758   12184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:46:07.298758   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:46:09.347664   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:46:09.347664   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:09.359554   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:46:11.956714   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:46:11.956714   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:11.957164   12184 sshutil.go:53] new ssh client: &{IP:172.18.192.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m03\id_rsa Username:docker}
	I0709 11:46:12.062232   12184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7634282s)
	I0709 11:46:12.062232   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:46:12.062232   12184 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0709 11:46:12.109298   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:46:12.109879   12184 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0709 11:46:12.157877   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:46:12.158190   12184 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:46:12.204844   12184 provision.go:87] duration metric: took 14.3797288s to configureAuth
	I0709 11:46:12.204897   12184 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:46:12.205856   12184 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:46:12.205856   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:46:14.285751   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:46:14.285751   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:14.285751   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:46:16.812024   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:46:16.812024   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:16.829143   12184 main.go:141] libmachine: Using SSH client type: native
	I0709 11:46:16.829926   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
	I0709 11:46:16.829926   12184 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:46:16.951643   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:46:16.951746   12184 buildroot.go:70] root file system type: tmpfs
	I0709 11:46:16.951885   12184 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:46:16.952066   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:46:19.072331   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:46:19.084257   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:19.084257   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:46:21.556905   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:46:21.569093   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:21.574753   12184 main.go:141] libmachine: Using SSH client type: native
	I0709 11:46:21.575570   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
	I0709 11:46:21.575570   12184 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:46:21.721560   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:46:21.721560   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:46:23.801895   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:46:23.801895   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:23.801895   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:46:26.369725   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:46:26.369725   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:26.376462   12184 main.go:141] libmachine: Using SSH client type: native
	I0709 11:46:26.377077   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
	I0709 11:46:26.377077   12184 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:46:28.738810   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:46:28.738810   12184 machine.go:97] duration metric: took 45.0586499s to provisionDockerMachine
	I0709 11:46:28.739108   12184 start.go:293] postStartSetup for "multinode-849000-m03" (driver="hyperv")
	I0709 11:46:28.739108   12184 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:46:28.750516   12184 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:46:28.750516   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
	I0709 11:46:30.881801   12184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:46:30.881801   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:30.882178   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
	I0709 11:46:33.432304   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32
	
	I0709 11:46:33.439187   12184 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:46:33.439646   12184 sshutil.go:53] new ssh client: &{IP:172.18.192.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m03\id_rsa Username:docker}
	I0709 11:46:33.549379   12184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7988459s)
	I0709 11:46:33.560329   12184 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:46:33.568759   12184 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:46:33.568759   12184 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:46:33.569540   12184 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:46:33.570768   12184 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:46:33.570840   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:46:33.582529   12184 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:46:33.600399   12184 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:46:33.645570   12184 start.go:296] duration metric: took 4.9064441s for postStartSetup
	I0709 11:46:33.645570   12184 fix.go:56] duration metric: took 1m27.1994616s for fixHost
	I0709 11:46:33.645570   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state

** /stderr **
multinode_test.go:284: W0709 11:45:04.226402   12184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0709 11:45:04.227936   12184 out.go:291] Setting OutFile to fd 876 ...
I0709 11:45:04.248629   12184 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 11:45:04.248629   12184 out.go:304] Setting ErrFile to fd 1300...
I0709 11:45:04.248629   12184 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 11:45:04.268701   12184 mustload.go:65] Loading cluster: multinode-849000
I0709 11:45:04.268701   12184 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 11:45:04.270473   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:06.434433   12184 main.go:141] libmachine: [stdout =====>] : Off

I0709 11:45:06.434433   12184 main.go:141] libmachine: [stderr =====>] : 
W0709 11:45:06.434433   12184 host.go:58] "multinode-849000-m03" host status: Stopped
I0709 11:45:06.438007   12184 out.go:177] * Starting "multinode-849000-m03" worker node in "multinode-849000" cluster
I0709 11:45:06.440588   12184 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0709 11:45:06.440588   12184 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
I0709 11:45:06.440588   12184 cache.go:56] Caching tarball of preloaded images
I0709 11:45:06.441969   12184 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0709 11:45:06.441969   12184 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0709 11:45:06.442608   12184 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
I0709 11:45:06.445353   12184 start.go:360] acquireMachinesLock for multinode-849000-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0709 11:45:06.445609   12184 start.go:364] duration metric: took 85.9µs to acquireMachinesLock for "multinode-849000-m03"
I0709 11:45:06.445795   12184 start.go:96] Skipping create...Using existing machine configuration
I0709 11:45:06.445839   12184 fix.go:54] fixHost starting: m03
I0709 11:45:06.446367   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:08.546450   12184 main.go:141] libmachine: [stdout =====>] : Off

I0709 11:45:08.546535   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:08.546535   12184 fix.go:112] recreateIfNeeded on multinode-849000-m03: state=Stopped err=<nil>
W0709 11:45:08.546613   12184 fix.go:138] unexpected machine state, will restart: <nil>
I0709 11:45:08.550528   12184 out.go:177] * Restarting existing hyperv VM for "multinode-849000-m03" ...
I0709 11:45:08.554216   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000-m03
I0709 11:45:11.612629   12184 main.go:141] libmachine: [stdout =====>] : 
I0709 11:45:11.624962   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:11.624962   12184 main.go:141] libmachine: Waiting for host to start...
I0709 11:45:11.625025   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:13.844659   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:45:13.844734   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:13.844734   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:45:16.323586   12184 main.go:141] libmachine: [stdout =====>] : 
I0709 11:45:16.335907   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:17.344715   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:19.509150   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:45:19.509150   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:19.509150   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:45:22.026433   12184 main.go:141] libmachine: [stdout =====>] : 
I0709 11:45:22.030364   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:23.032556   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:25.190461   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:45:25.196826   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:25.196978   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:45:27.677000   12184 main.go:141] libmachine: [stdout =====>] : 
I0709 11:45:27.677713   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:28.695314   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:30.861567   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:45:30.861567   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:30.861567   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:45:33.355392   12184 main.go:141] libmachine: [stdout =====>] : 
I0709 11:45:33.355472   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:34.378087   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:36.595043   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:45:36.595262   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:36.595341   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:45:39.077995   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:45:39.077995   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:39.089012   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:41.171975   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:45:41.184288   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:41.184505   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:45:43.672801   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:45:43.676692   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:43.676692   12184 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
I0709 11:45:43.679998   12184 machine.go:94] provisionDockerMachine start ...
I0709 11:45:43.679998   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:45.756835   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:45:45.760630   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:45.760630   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:45:48.232335   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:45:48.232335   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:48.249129   12184 main.go:141] libmachine: Using SSH client type: native
I0709 11:45:48.249955   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
I0709 11:45:48.249955   12184 main.go:141] libmachine: About to run SSH command:
hostname
I0709 11:45:48.379265   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0709 11:45:48.379373   12184 buildroot.go:166] provisioning hostname "multinode-849000-m03"
I0709 11:45:48.379373   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:50.448438   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:45:50.460660   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:50.460795   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:45:52.948513   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:45:52.948513   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:52.966748   12184 main.go:141] libmachine: Using SSH client type: native
I0709 11:45:52.966748   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
I0709 11:45:52.966748   12184 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-849000-m03 && echo "multinode-849000-m03" | sudo tee /etc/hostname
I0709 11:45:53.114883   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000-m03

I0709 11:45:53.114936   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:55.199931   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:45:55.200292   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:55.200292   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:45:57.674117   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:45:57.674117   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:57.692591   12184 main.go:141] libmachine: Using SSH client type: native
I0709 11:45:57.693152   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
I0709 11:45:57.693152   12184 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-849000-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-849000-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0709 11:45:57.824393   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0709 11:45:57.824946   12184 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
I0709 11:45:57.825005   12184 buildroot.go:174] setting up certificates
I0709 11:45:57.825005   12184 provision.go:84] configureAuth start
I0709 11:45:57.825128   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:45:59.872520   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:45:59.872595   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:45:59.872595   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:46:02.359336   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:46:02.359438   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:02.359510   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:46:04.449511   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:46:04.449511   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:04.449511   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:46:06.974686   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:46:06.974686   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:06.974686   12184 provision.go:143] copyHostCerts
I0709 11:46:06.976207   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
I0709 11:46:06.976207   12184 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
I0709 11:46:06.976207   12184 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
I0709 11:46:06.977112   12184 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
I0709 11:46:06.978972   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
I0709 11:46:06.978972   12184 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
I0709 11:46:06.978972   12184 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
I0709 11:46:06.979731   12184 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
I0709 11:46:06.980790   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
I0709 11:46:06.981223   12184 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
I0709 11:46:06.981223   12184 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
I0709 11:46:06.981814   12184 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
I0709 11:46:06.983056   12184 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000-m03 san=[127.0.0.1 172.18.192.32 localhost minikube multinode-849000-m03]
I0709 11:46:07.288459   12184 provision.go:177] copyRemoteCerts
I0709 11:46:07.298758   12184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0709 11:46:07.298758   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:46:09.347664   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:46:09.347664   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:09.359554   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:46:11.956714   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:46:11.956714   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:11.957164   12184 sshutil.go:53] new ssh client: &{IP:172.18.192.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m03\id_rsa Username:docker}
I0709 11:46:12.062232   12184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7634282s)
I0709 11:46:12.062232   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0709 11:46:12.062232   12184 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
I0709 11:46:12.109298   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0709 11:46:12.109879   12184 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0709 11:46:12.157877   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0709 11:46:12.158190   12184 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0709 11:46:12.204844   12184 provision.go:87] duration metric: took 14.3797288s to configureAuth
I0709 11:46:12.204897   12184 buildroot.go:189] setting minikube options for container-runtime
I0709 11:46:12.205856   12184 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 11:46:12.205856   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:46:14.285751   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:46:14.285751   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:14.285751   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:46:16.812024   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:46:16.812024   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:16.829143   12184 main.go:141] libmachine: Using SSH client type: native
I0709 11:46:16.829926   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
I0709 11:46:16.829926   12184 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0709 11:46:16.951643   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0709 11:46:16.951746   12184 buildroot.go:70] root file system type: tmpfs
I0709 11:46:16.951885   12184 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0709 11:46:16.952066   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:46:19.072331   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:46:19.084257   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:19.084257   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:46:21.556905   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:46:21.569093   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:21.574753   12184 main.go:141] libmachine: Using SSH client type: native
I0709 11:46:21.575570   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
I0709 11:46:21.575570   12184 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0709 11:46:21.721560   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0709 11:46:21.721560   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:46:23.801895   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:46:23.801895   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:23.801895   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:46:26.369725   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:46:26.369725   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:26.376462   12184 main.go:141] libmachine: Using SSH client type: native
I0709 11:46:26.377077   12184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.192.32 22 <nil> <nil>}
I0709 11:46:26.377077   12184 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0709 11:46:28.738810   12184 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I0709 11:46:28.738810   12184 machine.go:97] duration metric: took 45.0586499s to provisionDockerMachine
I0709 11:46:28.739108   12184 start.go:293] postStartSetup for "multinode-849000-m03" (driver="hyperv")
I0709 11:46:28.739108   12184 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0709 11:46:28.750516   12184 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0709 11:46:28.750516   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
I0709 11:46:30.881801   12184 main.go:141] libmachine: [stdout =====>] : Running

I0709 11:46:30.881801   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:30.882178   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m03 ).networkadapters[0]).ipaddresses[0]
I0709 11:46:33.432304   12184 main.go:141] libmachine: [stdout =====>] : 172.18.192.32

I0709 11:46:33.439187   12184 main.go:141] libmachine: [stderr =====>] : 
I0709 11:46:33.439646   12184 sshutil.go:53] new ssh client: &{IP:172.18.192.32 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m03\id_rsa Username:docker}
I0709 11:46:33.549379   12184 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7988459s)
I0709 11:46:33.560329   12184 ssh_runner.go:195] Run: cat /etc/os-release
I0709 11:46:33.568759   12184 info.go:137] Remote host: Buildroot 2023.02.9
I0709 11:46:33.568759   12184 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
I0709 11:46:33.569540   12184 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
I0709 11:46:33.570768   12184 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
I0709 11:46:33.570840   12184 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
I0709 11:46:33.582529   12184 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0709 11:46:33.600399   12184 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
I0709 11:46:33.645570   12184 start.go:296] duration metric: took 4.9064441s for postStartSetup
I0709 11:46:33.645570   12184 fix.go:56] duration metric: took 1m27.1994616s for fixHost
I0709 11:46:33.645570   12184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m03 ).state
multinode_test.go:285: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-849000 node start m03 -v=7 --alsologtostderr": exit status 1
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr: context deadline exceeded (208.2µs)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:294: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-849000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-849000 -n multinode-849000: (11.9476822s)
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-849000 logs -n 25: (8.2853082s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p multinode-849000 -- rollout       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:24 PDT |                     |
	|         | status deployment/busybox            |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:34 PDT | 09 Jul 24 11:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec          | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec          | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec          | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec          | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m --           |                  |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec          | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT |                     |
	|         | busybox-fc5497c4f-4hjks -- nslookup  |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec          | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:35 PDT | 09 Jul 24 11:35 PDT |
	|         | busybox-fc5497c4f-f2j8m -- nslookup  |                  |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- get pods -o   | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT | 09 Jul 24 11:36 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec          | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT |                     |
	|         | busybox-fc5497c4f-4hjks              |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec          | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT | 09 Jul 24 11:36 PDT |
	|         | busybox-fc5497c4f-f2j8m              |                  |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |                   |         |                     |                     |
	| kubectl | -p multinode-849000 -- exec          | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:36 PDT |                     |
	|         | busybox-fc5497c4f-f2j8m -- sh        |                  |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.192.1            |                  |                   |         |                     |                     |
	| node    | add -p multinode-849000 -v 3         | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:37 PDT | 09 Jul 24 11:40 PDT |
	|         | --alsologtostderr                    |                  |                   |         |                     |                     |
	| node    | multinode-849000 node stop m03       | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:43 PDT | 09 Jul 24 11:43 PDT |
	| node    | multinode-849000 node start          | multinode-849000 | minikube1\jenkins | v1.33.1 | 09 Jul 24 11:45 PDT |                     |
	|         | m03 -v=7 --alsologtostderr           |                  |                   |         |                     |                     |
	|---------|--------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 11:16:35
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 11:16:35.706571   11080 out.go:291] Setting OutFile to fd 1856 ...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.707294   11080 out.go:304] Setting ErrFile to fd 1916...
	I0709 11:16:35.707294   11080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 11:16:35.730175   11080 out.go:298] Setting JSON to false
	I0709 11:16:35.734088   11080 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7264,"bootTime":1720541731,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 11:16:35.734088   11080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 11:16:35.740900   11080 out.go:177] * [multinode-849000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 11:16:35.746952   11080 notify.go:220] Checking for updates...
	I0709 11:16:35.749517   11080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:16:35.752016   11080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 11:16:35.754074   11080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 11:16:35.757149   11080 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 11:16:35.759785   11080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 11:16:35.763232   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:16:35.763232   11080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 11:16:41.108594   11080 out.go:177] * Using the hyperv driver based on user configuration
	I0709 11:16:41.113436   11080 start.go:297] selected driver: hyperv
	I0709 11:16:41.113436   11080 start.go:901] validating driver "hyperv" against <nil>
	I0709 11:16:41.113436   11080 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 11:16:41.161717   11080 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 11:16:41.163562   11080 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:16:41.163562   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:16:41.163562   11080 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0709 11:16:41.163562   11080 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0709 11:16:41.163562   11080 start.go:340] cluster config:
	{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:16:41.164325   11080 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 11:16:41.169436   11080 out.go:177] * Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	I0709 11:16:41.171790   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:16:41.171790   11080 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 11:16:41.171790   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:16:41.172900   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:16:41.173204   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:16:41.173497   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:16:41.173834   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json: {Name:mkcd76fd0991636c9ebb3945d5f6230c136234ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:360] acquireMachinesLock for multinode-849000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:16:41.175145   11080 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-849000"
	I0709 11:16:41.175145   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:16:41.175717   11080 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 11:16:41.178833   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:16:41.179697   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:16:41.179858   11080 client.go:168] LocalClient.Create starting
	I0709 11:16:41.180393   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.180676   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181037   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:16:41.181305   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:16:41.181363   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:16:41.181499   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:16:43.203242   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:43.203345   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:16:44.905448   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:44.905673   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:46.397202   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:49.977487   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:49.978001   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:49.980413   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:16:50.481409   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: Creating VM...
	I0709 11:16:50.641284   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:16:53.557163   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:53.557877   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:16:53.557877   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:16:55.342337   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:55.343188   11080 main.go:141] libmachine: Creating VHD
	I0709 11:16:55.343188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:16:59.073202   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 250EFD27-3D80-4D94-9BBB-C36AC3EE4AF2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:16:59.073277   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:16:59.073277   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:16:59.081799   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:02.355243   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:02.356056   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd' -SizeBytes 20000MB
	I0709 11:17:04.920871   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:04.921598   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:04.921696   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-849000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:08.552901   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000 -DynamicMemoryEnabled $false
	I0709 11:17:10.906954   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:10.907329   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000 -Count 2
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:13.116210   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:13.117046   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\boot2docker.iso'
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:15.734658   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:15.734748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\disk.vhd'
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:18.434030   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:18.434648   11080 main.go:141] libmachine: Starting VM...
	I0709 11:17:18.434648   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000
	I0709 11:17:21.548427   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:21.548703   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:17:21.548703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:23.856308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:23.857327   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:23.857477   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:26.424823   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:26.425555   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:27.429457   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:29.669589   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:29.670580   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:32.232914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:33.238604   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:35.538308   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:35.539152   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:38.144845   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:39.150748   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:41.412652   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:41.412758   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:43.945561   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:17:43.946556   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:44.948904   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:47.223333   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:47.223493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:49.888321   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:49.889173   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:52.028837   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:52.029346   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:17:52.029346   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:54.184391   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:54.184452   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:17:56.739762   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:17:56.740551   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:56.747332   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:17:56.757962   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:17:56.757962   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:17:56.888454   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:17:56.888454   11080 buildroot.go:166] provisioning hostname "multinode-849000"
	I0709 11:17:56.888632   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:17:58.994922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:17:58.996092   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:01.590853   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:01.596255   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:01.596966   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:01.596966   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000 && echo "multinode-849000" | sudo tee /etc/hostname
	I0709 11:18:01.744135   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000
	
	I0709 11:18:01.744309   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:03.902520   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:03.902843   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:06.504362   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:06.505105   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:06.511047   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:06.511730   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:06.511730   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:18:06.661183   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:18:06.661276   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:18:06.661276   11080 buildroot.go:174] setting up certificates
	I0709 11:18:06.661276   11080 provision.go:84] configureAuth start
	I0709 11:18:06.661404   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:08.870371   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:08.871487   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:08.871619   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:11.479743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:11.480657   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:13.679886   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:13.680032   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:13.680386   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:16.351593   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:16.351812   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:16.351812   11080 provision.go:143] copyHostCerts
	I0709 11:18:16.351812   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:18:16.351812   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:18:16.352341   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:18:16.352562   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:18:16.353746   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:18:16.353870   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:18:16.353870   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:18:16.354397   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:18:16.355454   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:18:16.355782   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:18:16.355782   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:18:16.356143   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:18:16.357550   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000 san=[127.0.0.1 172.18.206.134 localhost minikube multinode-849000]
	I0709 11:18:16.528750   11080 provision.go:177] copyRemoteCerts
	I0709 11:18:16.542866   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:18:16.543526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:18.745596   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:18.746390   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:18.746524   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:21.394478   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:21.394661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:21.394962   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:21.507114   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9635719s)
	I0709 11:18:21.507261   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:18:21.507746   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:18:21.555636   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:18:21.556231   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0709 11:18:21.603561   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:18:21.604047   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:18:21.651880   11080 provision.go:87] duration metric: took 14.9904677s to configureAuth
	I0709 11:18:21.651880   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:18:21.652889   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:18:21.652889   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:23.890387   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:23.891029   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:26.558618   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:26.564345   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:26.565125   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:26.565125   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:18:26.688579   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:18:26.688579   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:18:26.688751   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:18:26.688751   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:28.871307   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:28.871918   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:31.492328   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:31.502951   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:31.503345   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:31.503345   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:18:31.658280   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 11:18:31.658412   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:33.800464   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:33.800741   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:36.418307   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:36.418361   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:36.423718   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:36.423718   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:36.424298   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:18:38.623401   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0709 11:18:38.623401   11080 machine.go:97] duration metric: took 46.5939015s to provisionDockerMachine
	I0709 11:18:38.624385   11080 client.go:171] duration metric: took 1m57.4441387s to LocalClient.Create
	I0709 11:18:38.624385   11080 start.go:167] duration metric: took 1m57.4442999s to libmachine.API.Create "multinode-849000"
	I0709 11:18:38.624385   11080 start.go:293] postStartSetup for "multinode-849000" (driver="hyperv")
	I0709 11:18:38.624385   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:18:38.635377   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:18:38.635377   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:40.803077   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:40.803227   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:40.803332   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:43.382675   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:43.382675   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:18:43.483674   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8482809s)
	I0709 11:18:43.496129   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:18:43.504466   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:18:43.504466   11080 command_runner.go:130] > ID=buildroot
	I0709 11:18:43.504466   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:18:43.504466   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:18:43.504466   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:18:43.504466   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:18:43.505074   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:18:43.506014   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:18:43.506014   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:18:43.518207   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:18:43.536167   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:18:43.580014   11080 start.go:296] duration metric: took 4.955526s for postStartSetup
	I0709 11:18:43.583840   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:45.719868   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:45.720485   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:48.244046   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:48.244917   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:18:48.247885   11080 start.go:128] duration metric: took 2m7.0717492s to createHost
	I0709 11:18:48.247974   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:50.357356   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:50.357583   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:52.888040   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:52.893710   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:52.893837   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:52.893837   11080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 11:18:53.018311   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549133.027082640
	
	I0709 11:18:53.018311   11080 fix.go:216] guest clock: 1720549133.027082640
	I0709 11:18:53.018311   11080 fix.go:229] Guest: 2024-07-09 11:18:53.02708264 -0700 PDT Remote: 2024-07-09 11:18:48.2478857 -0700 PDT m=+132.622337601 (delta=4.77919694s)
	I0709 11:18:53.018461   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:55.134647   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:55.134922   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:18:57.702062   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:57.706817   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:18:57.707574   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.206.134 22 <nil> <nil>}
	I0709 11:18:57.707574   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549133
	I0709 11:18:57.837990   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:18:53 UTC 2024
	
	I0709 11:18:57.837990   11080 fix.go:236] clock set: Tue Jul  9 18:18:53 UTC 2024
	 (err=<nil>)
	I0709 11:18:57.837990   11080 start.go:83] releasing machines lock for "multinode-849000", held for 2m16.662394s
	I0709 11:18:57.837990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:18:59.936903   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:18:59.937542   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:02.435112   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:02.440702   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:19:02.440914   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:02.450148   11080 ssh_runner.go:195] Run: cat /version.json
	I0709 11:19:02.451159   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.652335   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.652788   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:04.662070   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:19:07.368844   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.369236   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.369437   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:19:07.394987   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:07.395266   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:19:07.516234   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:19:07.516234   11080 command_runner.go:130] > {"iso_version": "v1.33.1-1720433170-19199", "kicbase_version": "v0.0.44-1720012048-19186", "minikube_version": "v1.33.1", "commit": "41ed6339bbe6a947e5e92015e7dd216db14d0b72"}
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: cat /version.json: (5.0661785s)
	I0709 11:19:07.516343   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0755151s)
	I0709 11:19:07.529057   11080 ssh_runner.go:195] Run: systemctl --version
	I0709 11:19:07.538439   11080 command_runner.go:130] > systemd 252 (252)
	I0709 11:19:07.538533   11080 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0709 11:19:07.550293   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:19:07.559188   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0709 11:19:07.559555   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:19:07.570397   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:19:07.596860   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:19:07.598042   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:19:07.598090   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:07.598448   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:07.631211   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0709 11:19:07.642798   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:19:07.672487   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:19:07.691044   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:19:07.702345   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:19:07.737161   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.766120   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:19:07.798415   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:19:07.831110   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:19:07.865314   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:19:07.899412   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:19:07.929191   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 11:19:07.959649   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:19:07.977886   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:19:07.990402   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:19:08.021057   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:08.212039   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 11:19:08.247477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:19:08.260899   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Unit]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:19:08.287773   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:19:08.287773   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:19:08.287773   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:19:08.287773   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:19:08.287773   11080 command_runner.go:130] > [Service]
	I0709 11:19:08.287773   11080 command_runner.go:130] > Type=notify
	I0709 11:19:08.287773   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:19:08.287773   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:19:08.287773   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:19:08.287773   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:19:08.287773   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:19:08.287773   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:19:08.287773   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:19:08.287773   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:19:08.288322   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:19:08.288322   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:19:08.288322   11080 command_runner.go:130] > ExecStart=
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:19:08.288380   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:19:08.288380   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:19:08.288380   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:19:08.288491   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:19:08.288532   11080 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0709 11:19:08.288532   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:19:08.288532   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:19:08.288603   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:19:08.288603   11080 command_runner.go:130] > Delegate=yes
	I0709 11:19:08.288603   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:19:08.288644   11080 command_runner.go:130] > KillMode=process
	I0709 11:19:08.288644   11080 command_runner.go:130] > [Install]
	I0709 11:19:08.288644   11080 command_runner.go:130] > WantedBy=multi-user.target
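The empty `ExecStart=` directly before the real `dockerd` command in the unit dump above is the standard systemd drop-in idiom: for non-oneshot services a second `ExecStart=` is rejected, so the empty assignment first clears the value inherited from the base unit. A minimal sketch of such a drop-in (the path and dockerd flags here are illustrative):

```shell
# Drop-in override: the bare "ExecStart=" resets the inherited command so the
# following line becomes the only start command for the service
dropin=/tmp/docker-override-demo.conf
cat > "$dropin" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF

grep -c '^ExecStart=' "$dropin"   # prints 2: the reset line plus the real command
```

Without the reset line, systemd would refuse to start the service with exactly the error quoted in the unit's comments.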
	I0709 11:19:08.299913   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.334941   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:19:08.378216   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:19:08.411780   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.445847   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:19:08.504747   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:19:08.527698   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:19:08.557879   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:19:08.569949   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:19:08.575730   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:19:08.587321   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:19:08.604542   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:19:08.652744   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:19:08.860138   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:19:09.036606   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:19:09.036846   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 11:19:09.086669   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:09.274594   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:11.819580   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5449771s)
	I0709 11:19:11.830623   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 11:19:11.865432   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:11.899527   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 11:19:12.080125   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 11:19:12.263695   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.465673   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 11:19:12.506610   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 11:19:12.540854   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:12.740781   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 11:19:12.845180   11080 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 11:19:12.856179   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0709 11:19:12.864333   11080 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0709 11:19:12.864333   11080 command_runner.go:130] > Device: 0,22	Inode: 881         Links: 1
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0709 11:19:12.864333   11080 command_runner.go:130] > Access: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864333   11080 command_runner.go:130] > Modify: 2024-07-09 18:19:12.773376049 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] > Change: 2024-07-09 18:19:12.777376059 +0000
	I0709 11:19:12.864643   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:12.865396   11080 start.go:562] Will wait 60s for crictl version
	I0709 11:19:12.878013   11080 ssh_runner.go:195] Run: which crictl
	I0709 11:19:12.883453   11080 command_runner.go:130] > /usr/bin/crictl
	I0709 11:19:12.896196   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 11:19:12.945750   11080 command_runner.go:130] > Version:  0.1.0
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeName:  docker
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0709 11:19:12.946800   11080 command_runner.go:130] > RuntimeApiVersion:  v1
	I0709 11:19:12.946914   11080 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 11:19:12.955749   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:12.986144   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:12.997084   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 11:19:13.033222   11080 command_runner.go:130] > 27.0.3
	I0709 11:19:13.039328   11080 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 11:19:13.039536   11080 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 11:19:13.044302   11080 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 11:19:13.047302   11080 ip.go:210] interface addr: 172.18.192.1/20
	I0709 11:19:13.058315   11080 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 11:19:13.064313   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
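The brace-group command above is minikube's idempotent /etc/hosts update: strip any existing `host.minikube.internal` line, append the current mapping, and copy the result back over the original. Reproduced against a scratch file (the stale IP is invented for the demonstration):

```shell
# Idempotent hosts-entry update, same shape as the logged brace-group command
hosts=/tmp/hosts-demo
printf '127.0.0.1\tlocalhost\n172.18.0.9\thost.minikube.internal\n' > "$hosts"

# Remove any stale mapping (lines ending in <tab>host.minikube.internal),
# then append the fresh one, writing to a temp file before copying back
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '172.18.192.1\thost.minikube.internal\n'; } > /tmp/hosts-demo.new
cp /tmp/hosts-demo.new "$hosts"

grep -c 'host.minikube.internal' "$hosts"   # prints 1: the stale entry was replaced
```

The preceding `grep 172.18.192.1 host.minikube.internal$ /etc/hosts` in the log is the fast path: if the entry already matches, the rewrite is skipped entirely.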
	I0709 11:19:13.085011   11080 kubeadm.go:877] updating cluster {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 11:19:13.085193   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:19:13.094647   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:13.119600   11080 docker.go:685] Got preloaded images: 
	I0709 11:19:13.119753   11080 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.2 wasn't preloaded
	I0709 11:19:13.132471   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:13.150071   11080 command_runner.go:139] > {"Repositories":{}}
	I0709 11:19:13.160388   11080 ssh_runner.go:195] Run: which lz4
	I0709 11:19:13.168652   11080 command_runner.go:130] > /usr/bin/lz4
	I0709 11:19:13.168652   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0709 11:19:13.180500   11080 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0709 11:19:13.186301   11080 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0709 11:19:13.187035   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359632088 bytes)
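The failed `stat` above is the expected branch: a non-zero exit status is read as "file absent", so the runner falls back to copying the preload tarball over scp. (The `%!s(MISSING)` fragments in the logged command are Go `fmt` verb-count artifacts in minikube's own logging; the intended format verbs are presumably file size and modification time.) The decision reduces to an exit-status check, sketched here with an illustrative path:

```shell
# Exit status of stat decides whether the preload tarball must be transferred
f=/tmp/preload-demo.tar.lz4
rm -f "$f"
if stat -c '%s %y' "$f" >/dev/null 2>&1; then
  echo "preload present, skipping transfer"
else
  echo "preload missing, transferring"   # this branch runs: the file was just removed
fi
```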
	I0709 11:19:14.857940   11080 docker.go:649] duration metric: took 1.6892825s to copy over tarball
	I0709 11:19:14.870175   11080 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0709 11:19:23.389025   11080 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5188212s)
	I0709 11:19:23.389025   11080 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0709 11:19:23.458573   11080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0709 11:19:23.485866   11080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.2":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.2":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.2":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.2":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0709 11:19:23.486188   11080 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0709 11:19:23.533118   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:23.744757   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:19:27.380382   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6356119s)
	I0709 11:19:27.389977   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0709 11:19:27.415657   11080 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0709 11:19:27.415657   11080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:27.415657   11080 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 11:19:27.415657   11080 cache_images.go:84] Images are preloaded, skipping loading
	I0709 11:19:27.415657   11080 kubeadm.go:928] updating node { 172.18.206.134 8443 v1.30.2 docker true true} ...
	I0709 11:19:27.415657   11080 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-849000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.206.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 11:19:27.423616   11080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 11:19:27.458657   11080 command_runner.go:130] > cgroupfs
	I0709 11:19:27.459385   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:27.459385   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:27.459452   11080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 11:19:27.459452   11080 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.206.134 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-849000 NodeName:multinode-849000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.206.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.206.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 11:19:27.459589   11080 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.206.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-849000"
	  kubeletExtraArgs:
	    node-ip: 172.18.206.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.206.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0709 11:19:27.472965   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubeadm
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubectl
	I0709 11:19:27.499670   11080 command_runner.go:130] > kubelet
	I0709 11:19:27.499841   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 11:19:27.511476   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 11:19:27.527506   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0709 11:19:27.555887   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 11:19:27.582917   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0709 11:19:27.625088   11080 ssh_runner.go:195] Run: grep 172.18.206.134	control-plane.minikube.internal$ /etc/hosts
	I0709 11:19:27.629979   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.206.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0709 11:19:27.662105   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:27.863890   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:27.891871   11080 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000 for IP: 172.18.206.134
	I0709 11:19:27.891871   11080 certs.go:194] generating shared ca certs ...
	I0709 11:19:27.891974   11080 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 11:19:27.892690   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 11:19:27.893231   11080 certs.go:256] generating profile certs ...
	I0709 11:19:27.894104   11080 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key
	I0709 11:19:27.894284   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt with IP's: []
	I0709 11:19:28.075685   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt ...
	I0709 11:19:28.075685   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.crt: {Name:mk25257931a758267f442465386bb9bdebfd15e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.077683   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key ...
	I0709 11:19:28.077683   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\client.key: {Name:mk28ea0dfb093b7e1eceacf2d9e8a6ee777dbd4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.078679   11080 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab
	I0709 11:19:28.078679   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.206.134]
	I0709 11:19:28.282674   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab ...
	I0709 11:19:28.282674   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab: {Name:mk6d3927cc1582195a75050ba0c963c9f3cc6b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.284187   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab ...
	I0709 11:19:28.284187   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab: {Name:mk7c2c31b56e9fbc5ac0d0a2d8ec4a706b474e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.285485   11080 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt
	I0709 11:19:28.296251   11080 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key.86d190ab -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key
	I0709 11:19:28.297243   11080 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key
	I0709 11:19:28.297243   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt with IP's: []
	I0709 11:19:28.588714   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt ...
	I0709 11:19:28.588714   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt: {Name:mk558fea8586bf42355b37f550a2aab396534e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590476   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key ...
	I0709 11:19:28.590476   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key: {Name:mk91292cc98d71191163856df723afdf525149d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0709 11:19:28.590924   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0709 11:19:28.591953   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0709 11:19:28.592200   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0709 11:19:28.592414   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0709 11:19:28.592581   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0709 11:19:28.592751   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0709 11:19:28.601940   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0709 11:19:28.602968   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 11:19:28.602968   11080 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 11:19:28.603997   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 11:19:28.604332   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 11:19:28.604696   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 11:19:28.605012   11080 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 11:19:28.605757   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem -> /usr/share/ca-certificates/15032.pem
	I0709 11:19:28.606105   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /usr/share/ca-certificates/150322.pem
	I0709 11:19:28.606281   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:28.607895   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 11:19:28.657063   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 11:19:28.708475   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 11:19:28.753169   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 11:19:28.799111   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0709 11:19:28.843096   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0709 11:19:28.892474   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 11:19:28.936778   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0709 11:19:28.983720   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 11:19:29.032197   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 11:19:29.078840   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 11:19:29.121438   11080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 11:19:29.166376   11080 ssh_runner.go:195] Run: openssl version
	I0709 11:19:29.174606   11080 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0709 11:19:29.186263   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 11:19:29.214563   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221452   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.221529   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.233587   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 11:19:29.241034   11080 command_runner.go:130] > 51391683
	I0709 11:19:29.253531   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 11:19:29.287599   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 11:19:29.319642   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.327043   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.340563   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 11:19:29.351251   11080 command_runner.go:130] > 3ec20f2e
	I0709 11:19:29.363289   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
	I0709 11:19:29.394996   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 11:19:29.430863   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439488   11080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.439598   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.451335   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 11:19:29.461060   11080 command_runner.go:130] > b5213941
	I0709 11:19:29.472325   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 11:19:29.502349   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 11:19:29.508349   11080 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.508349   11080 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0709 11:19:29.509336   11080 kubeadm.go:391] StartCluster: {Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 11:19:29.517326   11080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 11:19:29.552571   11080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0709 11:19:29.570263   11080 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0709 11:19:29.583129   11080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 11:19:29.614110   11080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0709 11:19:29.630064   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0709 11:19:29.630668   11080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631001   11080 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0709 11:19:29.631083   11080 kubeadm.go:156] found existing configuration files:
	
	I0709 11:19:29.643858   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 11:19:29.660913   11080 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.660913   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0709 11:19:29.672874   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0709 11:19:29.701166   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 11:19:29.719398   11080 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.719398   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0709 11:19:29.732866   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0709 11:19:29.764341   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.780362   11080 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.781070   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0709 11:19:29.793378   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 11:19:29.822887   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 11:19:29.839358   11080 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.839848   11080 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0709 11:19:29.851450   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0709 11:19:29.868927   11080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0709 11:19:30.273184   11080 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:30.273184   11080 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0709 11:19:43.382099   11080 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [init] Using Kubernetes version: v1.30.2
	I0709 11:19:43.382134   11080 command_runner.go:130] > [preflight] Running pre-flight checks
	I0709 11:19:43.382302   11080 kubeadm.go:309] [preflight] Running pre-flight checks
	I0709 11:19:43.382490   11080 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382562   11080 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0709 11:19:43.382843   11080 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0709 11:19:43.382843   11080 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.382843   11080 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0709 11:19:43.385956   11080 out.go:204]   - Generating certificates and keys ...
	I0709 11:19:43.386701   11080 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0709 11:19:43.386720   11080 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0709 11:19:43.386939   11080 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386963   11080 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0709 11:19:43.386994   11080 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.386994   11080 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0709 11:19:43.387517   11080 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387517   11080 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0709 11:19:43.387702   11080 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387746   11080 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0709 11:19:43.387967   11080 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.387967   11080 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0709 11:19:43.388299   11080 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388370   11080 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388585   11080 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388585   11080 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-849000] and IPs [172.18.206.134 127.0.0.1 ::1]
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0709 11:19:43.388889   11080 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0709 11:19:43.388889   11080 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.388889   11080 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0709 11:19:43.389891   11080 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0709 11:19:43.389891   11080 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.389891   11080 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0709 11:19:43.392839   11080 out.go:204]   - Booting up control plane ...
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0709 11:19:43.393848   11080 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.393848   11080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0709 11:19:43.394901   11080 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001525046s
	I0709 11:19:43.394901   11080 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.394901   11080 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0709 11:19:43.395906   11080 kubeadm.go:309] [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [api-check] The API server is healthy after 6.503156666s
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0709 11:19:43.395906   11080 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0709 11:19:43.395906   11080 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.395906   11080 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0709 11:19:43.396929   11080 kubeadm.go:309] [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 command_runner.go:130] > [mark-control-plane] Marking the node multinode-849000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0709 11:19:43.396929   11080 kubeadm.go:309] [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.396929   11080 command_runner.go:130] > [bootstrap-token] Using token: 2v8w3f.q6s4uugm84pg79gm
	I0709 11:19:43.399982   11080 out.go:204]   - Configuring RBAC rules ...
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0709 11:19:43.399982   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.399982   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0709 11:19:43.400850   11080 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.400850   11080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0709 11:19:43.401848   11080 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0709 11:19:43.401848   11080 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0709 11:19:43.401848   11080 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.401848   11080 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0709 11:19:43.401848   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0709 11:19:43.402846   11080 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.402846   11080 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0709 11:19:43.402846   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0709 11:19:43.403890   11080 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0709 11:19:43.403890   11080 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0709 11:19:43.403890   11080 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0709 11:19:43.403890   11080 kubeadm.go:309] 
	I0709 11:19:43.403890   11080 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 \
	I0709 11:19:43.404882   11080 command_runner.go:130] > 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 	--control-plane 
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0709 11:19:43.404882   11080 kubeadm.go:309] 
	I0709 11:19:43.404882   11080 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.404882   11080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 2v8w3f.q6s4uugm84pg79gm \
	I0709 11:19:43.405851   11080 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d5fcedcbfd32b8cad5fa7bf46f4102eab0840992b9f32126f9317f04bd040307 
	I0709 11:19:43.405851   11080 cni.go:84] Creating CNI manager for ""
	I0709 11:19:43.405851   11080 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0709 11:19:43.408882   11080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0709 11:19:43.427890   11080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0709 11:19:43.436838   11080 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0709 11:19:43.436838   11080 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0709 11:19:43.436838   11080 command_runner.go:130] > Access: 2024-07-09 18:17:47.269542400 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Modify: 2024-07-08 15:41:40.000000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] > Change: 2024-07-09 11:17:38.873000000 +0000
	I0709 11:19:43.436838   11080 command_runner.go:130] >  Birth: -
	I0709 11:19:43.437660   11080 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0709 11:19:43.437724   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0709 11:19:43.486974   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0709 11:19:44.013734   11080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.028712   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0709 11:19:44.056718   11080 command_runner.go:130] > serviceaccount/kindnet created
	I0709 11:19:44.082804   11080 command_runner.go:130] > daemonset.apps/kindnet created
	I0709 11:19:44.086715   11080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.101878   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-849000 minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8 minikube.k8s.io/name=multinode-849000 minikube.k8s.io/primary=true
	I0709 11:19:44.115923   11080 command_runner.go:130] > -16
	I0709 11:19:44.121702   11080 ops.go:34] apiserver oom_adj: -16
	I0709 11:19:44.326882   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0709 11:19:44.332192   11080 command_runner.go:130] > node/multinode-849000 labeled
	I0709 11:19:44.342094   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.456107   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:44.849260   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:44.954493   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.356403   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.456462   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:45.855390   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:45.956473   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.355707   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.465842   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:46.857102   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:46.969191   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.359571   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.471625   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:47.845990   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:47.968255   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.348435   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.444253   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:48.849560   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:48.962518   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.355988   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.464938   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:49.857549   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:49.960971   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.358892   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.517544   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:50.859431   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:50.965459   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.346160   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.448688   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:51.850874   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:51.960813   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.349922   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.460568   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:52.858017   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:52.978603   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.347266   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.460858   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:53.852199   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:53.970042   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.358007   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.467115   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:54.847966   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:54.971538   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.352008   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.457997   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:55.855006   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:55.967023   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.356509   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.497561   11080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0709 11:19:56.848447   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0709 11:19:56.958599   11080 command_runner.go:130] > NAME      SECRETS   AGE
	I0709 11:19:56.958599   11080 command_runner.go:130] > default   0         0s
	I0709 11:19:56.958599   11080 kubeadm.go:1107] duration metric: took 12.8717652s to wait for elevateKubeSystemPrivileges
	W0709 11:19:56.958599   11080 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0709 11:19:56.958599   11080 kubeadm.go:393] duration metric: took 27.4491691s to StartCluster
	I0709 11:19:56.958599   11080 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.958599   11080 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:56.961504   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 11:19:56.963374   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0709 11:19:56.963460   11080 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 11:19:56.963460   11080 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0709 11:19:56.963779   11080 addons.go:69] Setting default-storageclass=true in profile "multinode-849000"
	I0709 11:19:56.963724   11080 addons.go:69] Setting storage-provisioner=true in profile "multinode-849000"
	I0709 11:19:56.963837   11080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-849000"
	I0709 11:19:56.963837   11080 addons.go:234] Setting addon storage-provisioner=true in "multinode-849000"
	I0709 11:19:56.963837   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:56.963837   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:19:56.964647   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.965248   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:56.970232   11080 out.go:177] * Verifying Kubernetes components...
	I0709 11:19:56.985249   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:19:57.211673   11080 command_runner.go:130] > apiVersion: v1
	I0709 11:19:57.211752   11080 command_runner.go:130] > data:
	I0709 11:19:57.211752   11080 command_runner.go:130] >   Corefile: |
	I0709 11:19:57.211752   11080 command_runner.go:130] >     .:53 {
	I0709 11:19:57.211752   11080 command_runner.go:130] >         errors
	I0709 11:19:57.211752   11080 command_runner.go:130] >         health {
	I0709 11:19:57.211752   11080 command_runner.go:130] >            lameduck 5s
	I0709 11:19:57.211752   11080 command_runner.go:130] >         }
	I0709 11:19:57.211752   11080 command_runner.go:130] >         ready
	I0709 11:19:57.211825   11080 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0709 11:19:57.211825   11080 command_runner.go:130] >            pods insecure
	I0709 11:19:57.211825   11080 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0709 11:19:57.211825   11080 command_runner.go:130] >            ttl 30
	I0709 11:19:57.211825   11080 command_runner.go:130] >         }
	I0709 11:19:57.211825   11080 command_runner.go:130] >         prometheus :9153
	I0709 11:19:57.211825   11080 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0709 11:19:57.211914   11080 command_runner.go:130] >            max_concurrent 1000
	I0709 11:19:57.211914   11080 command_runner.go:130] >         }
	I0709 11:19:57.211914   11080 command_runner.go:130] >         cache 30
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loop
	I0709 11:19:57.211914   11080 command_runner.go:130] >         reload
	I0709 11:19:57.211914   11080 command_runner.go:130] >         loadbalance
	I0709 11:19:57.212061   11080 command_runner.go:130] >     }
	I0709 11:19:57.212061   11080 command_runner.go:130] > kind: ConfigMap
	I0709 11:19:57.212061   11080 command_runner.go:130] > metadata:
	I0709 11:19:57.212127   11080 command_runner.go:130] >   creationTimestamp: "2024-07-09T18:19:42Z"
	I0709 11:19:57.212127   11080 command_runner.go:130] >   name: coredns
	I0709 11:19:57.212127   11080 command_runner.go:130] >   namespace: kube-system
	I0709 11:19:57.212127   11080 command_runner.go:130] >   resourceVersion: "259"
	I0709 11:19:57.212301   11080 command_runner.go:130] >   uid: 7f6d77d9-aa71-4460-bf8f-36c58243a4c9
	I0709 11:19:57.212540   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0709 11:19:57.402732   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 11:19:57.866428   11080 command_runner.go:130] > configmap/coredns replaced
	I0709 11:19:57.866428   11080 start.go:946] {"host.minikube.internal": 172.18.192.1} host record injected into CoreDNS's ConfigMap
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:57.868409   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.869413   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:57.870414   11080 cert_rotation.go:137] Starting client certificate rotation controller
	I0709 11:19:57.870414   11080 node_ready.go:35] waiting up to 6m0s for node "multinode-849000" to be "Ready" ...
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.870414   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.870414   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.870414   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.870414   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.885872   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.885872   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.885872   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.885872   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.885978   11080 round_trippers.go:580]     Audit-Id: 6bb3d639-9069-4a29-8363-06f8a9831c96
	I0709 11:19:57.886681   11080 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0709 11:19:57.886681   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:57.887054   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Audit-Id: f8472087-a57e-416c-8eb7-93f828e86e4a
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.887125   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.887125   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.887125   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.887908   11080 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"389","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:57.888641   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:57.888641   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:57.888641   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:19:57.888641   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:57.922291   11080 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0709 11:19:57.922618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Audit-Id: 71677033-c49e-4d37-8393-48341086209c
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:57.922618   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:57.922618   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:57 GMT
	I0709 11:19:57.922733   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"391","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.379300   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.379300   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.379300   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.379300   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.384286   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:19:58.384390   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384390   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 0be5af66-01cb-451f-b03f-f7b17cb342f0
	I0709 11:19:58.384457   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Audit-Id: 73b21b85-deb0-469b-929c-809b7004c7a7
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.384457   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Content-Length: 291
	I0709 11:19:58.384457   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3bcb959e-4218-460b-a50f-627bf9af6e4d","resourceVersion":"401","creationTimestamp":"2024-07-09T18:19:42Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0709 11:19:58.384457   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:58.384457   11080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-849000" context rescaled to 1 replicas
	I0709 11:19:58.870813   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:58.871025   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:58.871025   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:58.871025   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:58.873618   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:19:58.873618   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Audit-Id: ad90069a-940e-4cdb-af81-263d232584a4
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:58.873618   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:58.874322   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:58.874322   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:58 GMT
	I0709 11:19:58.874523   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.315661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.317106   11080 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 11:19:59.317937   11080 kapi.go:59] client config for multinode-849000: &rest.Config{Host:"https://172.18.206.134:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-849000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21515a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0709 11:19:59.319000   11080 addons.go:234] Setting addon default-storageclass=true in "multinode-849000"
	I0709 11:19:59.319148   11080 host.go:66] Checking if "multinode-849000" exists ...
	I0709 11:19:59.320086   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:19:59.323800   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:19:59.326790   11080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0709 11:19:59.329802   11080 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:19:59.329802   11080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0709 11:19:59.329802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:19:59.380372   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.380372   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.380485   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.380485   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.383785   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:19:59.384697   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.384697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Audit-Id: 2d911086-1ff9-4073-8947-dda5637edc43
	I0709 11:19:59.384697   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.385157   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.876671   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:19:59.876962   11080 round_trippers.go:469] Request Headers:
	I0709 11:19:59.876962   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:19:59.876962   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:19:59.882163   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:19:59.882430   11080 round_trippers.go:577] Response Headers:
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:19:59.882513   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:19:59 GMT
	I0709 11:19:59.882513   11080 round_trippers.go:580]     Audit-Id: ad80d923-4aa0-4499-baf3-ad4ec184183d
	I0709 11:19:59.882575   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:19:59.883719   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:19:59.884541   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:00.380571   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.380571   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.380571   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.380571   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.383966   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:00.384064   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Audit-Id: 4a57b8ec-36c2-4d90-9953-8040b268ad72
	I0709 11:20:00.384064   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.384193   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.384193   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.384227   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.384339   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:00.874487   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:00.874487   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:00.874577   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:00.874577   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:00.878085   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:00.878446   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Audit-Id: 7a79b48d-490c-45b9-8151-9d41d845548a
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:00.878446   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:00.878446   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:00 GMT
	I0709 11:20:00.878824   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.384736   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.384736   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.384736   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.384736   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.389692   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:01.389768   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.389768   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.389768   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.389862   11080 round_trippers.go:580]     Audit-Id: 1717079c-a1a4-4056-ab5c-ebb223423669
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.389950   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.389950   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.391360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.648493   11080 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:01.648493   11080 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0709 11:20:01.648493   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000 ).state
	I0709 11:20:01.693665   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:01.693737   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:01.693813   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:01.876763   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:01.876763   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:01.876763   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:01.876763   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:01.879377   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:01.879377   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:01.879377   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:01.879377   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:01 GMT
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Audit-Id: 0ed34bf6-0054-408f-9605-05f03b8f80e6
	I0709 11:20:01.880274   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:01.880494   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.384156   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.384242   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.384242   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.384242   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.387596   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:02.388425   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.388425   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.388519   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.388569   11080 round_trippers.go:580]     Audit-Id: 259b4cd6-103a-46f6-84e4-4843fc15af0a
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.388617   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.389015   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:02.389720   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:02.877416   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:02.877512   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:02.877583   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:02.877583   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:02.880264   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:02.880264   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Audit-Id: 5562798d-5a0c-40f4-971f-b148e1abc842
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:02.880264   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:02.880264   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:02 GMT
	I0709 11:20:02.881513   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.385289   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.385402   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.385505   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.385568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.388996   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.389181   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.389181   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Audit-Id: 4ecfd387-5cb9-439c-becc-8c20cdb41af7
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.389181   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.389360   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.879716   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:03.879972   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:03.879972   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:03.879972   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:03.883598   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:03.883598   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:03.883598   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:03.883598   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:03 GMT
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Audit-Id: ec1efeda-bf31-45f7-a76f-11d053440253
	I0709 11:20:03.883946   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:03.884488   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:03.951175   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:03.951212   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:03.951320   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:04.384770   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.384770   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.384770   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.384770   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.390877   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:04.390877   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.390877   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.390877   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Audit-Id: 2dfefc86-a830-4942-9bba-6769c2bc2c15
	I0709 11:20:04.391003   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.391263   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:04.391723   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:04.417029   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:04.417846   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:04.417999   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:04.559903   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0709 11:20:04.876248   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:04.876326   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:04.876326   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:04.876326   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:04.879898   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:04.879898   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Audit-Id: 1a6b0670-7193-473e-b8b3-1e5ed801e6c2
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:04.879898   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:04.879898   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:04 GMT
	I0709 11:20:04.880302   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.131215   11080 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0709 11:20:05.131215   11080 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0709 11:20:05.131215   11080 command_runner.go:130] > pod/storage-provisioner created
	I0709 11:20:05.382732   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.382846   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.382846   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.382940   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.385465   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:05.385465   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Audit-Id: a9b472dd-22b2-460d-9517-6e634e4a101a
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.385465   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.385465   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.386469   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:05.875363   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:05.875363   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:05.875363   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:05.875363   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:05.879073   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:05.879530   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Audit-Id: 27ad306f-2225-40f7-8dc1-fa87ab3246f1
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:05.879530   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:05.879530   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:05.879646   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:05.879646   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:05 GMT
	I0709 11:20:05.880110   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.381697   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.381697   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.381697   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.381697   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.385207   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.385655   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Audit-Id: 696fd9a0-d92d-43a9-8bb1-bfc5d15a688d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.385720   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.385720   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.385720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stdout =====>] : 172.18.206.134
	
	I0709 11:20:06.619407   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:06.619934   11080 sshutil.go:53] new ssh client: &{IP:172.18.206.134 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000\id_rsa Username:docker}
	I0709 11:20:06.761070   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0709 11:20:06.873491   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:06.873559   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.873559   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.873615   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.876478   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.876544   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Audit-Id: efcee314-8dd6-4c48-a1a6-4bf059942d04
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.876544   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.876612   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.876612   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.876721   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:06.877563   11080 node_ready.go:53] node "multinode-849000" has status "Ready":"False"
	I0709 11:20:06.908144   11080 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0709 11:20:06.908847   11080 round_trippers.go:463] GET https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses
	I0709 11:20:06.908910   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.908910   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.908910   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.912483   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:06.912686   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Length: 1273
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Audit-Id: 739ee856-002a-4545-9544-df6be0efec2a
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.912686   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.912686   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.912921   11080 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0709 11:20:06.913516   11080 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.913596   11080 round_trippers.go:463] PUT https://172.18.206.134:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0709 11:20:06.913596   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:06.913704   11080 round_trippers.go:473]     Content-Type: application/json
	I0709 11:20:06.913704   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:06.916586   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:06.916586   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:06 GMT
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Audit-Id: a5ae0cbf-9df0-489a-8da4-2e8f3aa910ad
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:06.916586   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:06.916586   11080 round_trippers.go:580]     Content-Length: 1220
	I0709 11:20:06.917609   11080 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6df8fae-4e71-4bc4-9387-9deb156fda60","resourceVersion":"425","creationTimestamp":"2024-07-09T18:20:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-09T18:20:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0709 11:20:06.921571   11080 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0709 11:20:06.923563   11080 addons.go:510] duration metric: took 9.9600694s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0709 11:20:07.375568   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.375568   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.375568   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.375568   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.378569   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:07.379620   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Audit-Id: bd77f714-dc63-4d2c-bf78-52162a6b64d7
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.379620   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.379620   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.380117   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:07.875799   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:07.875861   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:07.875861   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:07.875861   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:07.880450   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:07.880704   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Audit-Id: 74d6bf60-f1ad-4db1-861f-6ea7ba47b092
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:07.880704   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:07.880704   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:07 GMT
	I0709 11:20:07.881227   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"340","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0709 11:20:08.380911   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.381007   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.381007   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.381059   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.384650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.384650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Audit-Id: 46699637-e1f2-4ffe-9a5a-606601b7ce76
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.384650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.385170   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.385170   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.385430   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.385689   11080 node_ready.go:49] node "multinode-849000" has status "Ready":"True"
	I0709 11:20:08.385689   11080 node_ready.go:38] duration metric: took 10.5152391s for node "multinode-849000" to be "Ready" ...
	I0709 11:20:08.385689   11080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:08.385689   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:08.385689   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.385689   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.385689   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.389650   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:08.389650   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.389650   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.389650   11080 round_trippers.go:580]     Audit-Id: c7a373c1-e4d1-49a7-b63d-f1f5fe5cbdfe
	I0709 11:20:08.391677   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0709 11:20:08.396680   11080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:08.396680   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.396680   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.396680   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.397654   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.401662   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:08.401662   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.402211   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Audit-Id: f0c73321-6fb5-4d40-a2ca-139f50a7329a
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.402211   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.402451   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.403030   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.403030   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.403030   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.403030   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.409674   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:08.409674   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.409674   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.410244   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Audit-Id: f9f6bf0c-50a8-416b-b487-7a0381a93ada
	I0709 11:20:08.410244   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.411023   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:08.904464   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:08.904538   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.904538   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.904538   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.924115   11080 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0709 11:20:08.924115   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.924115   11080 round_trippers.go:580]     Audit-Id: 5c7a83f8-f6fb-40c3-af41-44c2d80fb1eb
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.924267   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.924267   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.924509   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:08.925643   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:08.925643   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:08.925643   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:08.925643   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:08.942620   11080 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0709 11:20:08.943087   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Audit-Id: 1a00f334-2356-4158-b461-0e0c6821e0b6
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:08.943087   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:08.943087   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:08 GMT
	I0709 11:20:08.945720   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.412235   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.412389   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.412389   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.412389   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.417018   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.417018   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.417696   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.417696   11080 round_trippers.go:580]     Audit-Id: 1bacafec-faf2-4175-9ce5-e5206b1140e1
	I0709 11:20:09.417950   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"433","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0709 11:20:09.418720   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.418777   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.418777   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.418777   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.421159   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.421159   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Audit-Id: 2bf8156c-3153-4e3e-b8c5-b1b8a2e4e26e
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.421886   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.421886   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.423016   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.901337   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lzsvc
	I0709 11:20:09.901337   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.901337   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.901337   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.953926   11080 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0709 11:20:09.953926   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.953926   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.953926   11080 round_trippers.go:580]     Audit-Id: 1aada5b5-53a1-4882-b982-815daf34a5c5
	I0709 11:20:09.955836   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0709 11:20:09.956635   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.956732   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.956732   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.956732   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.959094   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:09.959094   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.959094   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.959094   11080 round_trippers.go:580]     Audit-Id: ae59e9a3-f8ac-437b-9c75-8931309c73ad
	I0709 11:20:09.960120   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.960120   11080 pod_ready.go:92] pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.960661   11080 pod_ready.go:81] duration metric: took 1.5639759s for pod "coredns-7db6d8ff4d-lzsvc" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.960661   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-849000
	I0709 11:20:09.960661   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.960828   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.960828   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.969075   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.969075   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.969075   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Audit-Id: a17b78fa-415e-466e-8ae8-a1653319ab27
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.969075   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.969743   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-849000","namespace":"kube-system","uid":"d9414b5f-b783-46b5-bd41-e07fbd338491","resourceVersion":"303","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.206.134:2379","kubernetes.io/config.hash":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.mirror":"00d2810dc12d239da90f0b685d490ea1","kubernetes.io/config.seen":"2024-07-09T18:19:42.812164051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0709 11:20:09.969743   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.970269   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.970321   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.970321   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.979269   11080 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0709 11:20:09.979269   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Audit-Id: cfddc806-0d43-46bb-bd86-3712a4ab9215
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.979269   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.979269   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.979994   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.980431   11080 pod_ready.go:92] pod "etcd-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.980497   11080 pod_ready.go:81] duration metric: took 19.7697ms for pod "etcd-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980497   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.980690   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-849000
	I0709 11:20:09.980722   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.980753   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.980753   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.984639   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:09.984639   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Audit-Id: 4f8bf9fa-3246-46ce-b3d4-8ea91623128e
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.984639   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.984639   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:09 GMT
	I0709 11:20:09.985248   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-849000","namespace":"kube-system","uid":"185dfcae-7f97-43a4-8ad7-9c2e18ad93f4","resourceVersion":"300","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.206.134:8443","kubernetes.io/config.hash":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.mirror":"6c5aecb1b8c6cc09708145a5a2b910e7","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165051Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0709 11:20:09.986253   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:09.986253   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.986320   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:09.986320   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.990658   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:09.990658   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:09.990658   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Audit-Id: fc9d97ed-a036-474e-af5f-aba9fc7ea966
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:09.990658   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:09.991081   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:09.991515   11080 pod_ready.go:92] pod "kube-apiserver-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:09.991547   11080 pod_ready.go:81] duration metric: took 11.0006ms for pod "kube-apiserver-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991547   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:09.991623   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-849000
	I0709 11:20:09.991803   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:09.991803   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:09.991803   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.002697   11080 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0709 11:20:10.002697   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.002697   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.002697   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Audit-Id: 5618d530-048d-4e22-a41f-dbc85f57723c
	I0709 11:20:10.003122   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.003187   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.003187   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.003445   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-849000","namespace":"kube-system","uid":"84786301-1bd7-4d77-900b-1130c36259bc","resourceVersion":"316","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.mirror":"80042b784bd1b89970e61035824d0df9","kubernetes.io/config.seen":"2024-07-09T18:19:42.812165951Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0709 11:20:10.004195   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.004275   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.004275   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.004275   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.011235   11080 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0709 11:20:10.011235   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Audit-Id: b83b8a86-c88b-4eda-adbc-8a4b41174f1d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.011235   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.011235   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.011896   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.012314   11080 pod_ready.go:92] pod "kube-controller-manager-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.012440   11080 pod_ready.go:81] duration metric: took 20.8924ms for pod "kube-controller-manager-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012440   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.012550   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qv64t
	I0709 11:20:10.012621   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.012662   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.012662   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.016102   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.016102   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Audit-Id: 9328b861-5000-4723-bef4-66bdf082cdc5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.016102   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.016102   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.016102   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qv64t","generateName":"kube-proxy-","namespace":"kube-system","uid":"64fd2bca-c117-405b-98c4-db980781839b","resourceVersion":"407","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"20beb658-ecf0-4085-ad20-237b0700e9f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20beb658-ecf0-4085-ad20-237b0700e9f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0709 11:20:10.017415   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.017554   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.017554   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.017554   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.021755   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.021755   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.021885   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Audit-Id: 7b57217c-1b40-42ea-bd05-ba32c6c09379
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.021885   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.022911   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.023043   11080 pod_ready.go:92] pod "kube-proxy-qv64t" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.023043   11080 pod_ready.go:81] duration metric: took 10.6037ms for pod "kube-proxy-qv64t" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.023043   11080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.182509   11080 request.go:629] Waited for 159.4656ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182778   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-849000
	I0709 11:20:10.182865   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.182865   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.182897   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.186242   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.186242   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Audit-Id: 821c7888-15a2-4ad9-a6ba-adc53ab8a4f6
	I0709 11:20:10.186242   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.186554   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.186554   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.186784   11080 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-849000","namespace":"kube-system","uid":"03dff506-a8f6-41bd-baac-1ef9ad6892e3","resourceVersion":"323","creationTimestamp":"2024-07-09T18:19:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.mirror":"195da26e303fc74234d776f2fa95376a","kubernetes.io/config.seen":"2024-07-09T18:19:42.812159751Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0709 11:20:10.385659   11080 request.go:629] Waited for 198.2784ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes/multinode-849000
	I0709 11:20:10.385659   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.385659   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.385659   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.389558   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.389771   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Audit-Id: 9cc904cb-e823-4a93-85c2-226f98e81fdf
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.389771   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.389771   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.390169   11080 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-09T18:19:39Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0709 11:20:10.390760   11080 pod_ready.go:92] pod "kube-scheduler-multinode-849000" in "kube-system" namespace has status "Ready":"True"
	I0709 11:20:10.390865   11080 pod_ready.go:81] duration metric: took 367.8204ms for pod "kube-scheduler-multinode-849000" in "kube-system" namespace to be "Ready" ...
	I0709 11:20:10.390865   11080 pod_ready.go:38] duration metric: took 2.0051694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0709 11:20:10.390944   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0709 11:20:10.403679   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0709 11:20:10.435279   11080 command_runner.go:130] > 2115
	I0709 11:20:10.436278   11080 api_server.go:72] duration metric: took 13.4725942s to wait for apiserver process to appear ...
	I0709 11:20:10.436474   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0709 11:20:10.436474   11080 api_server.go:253] Checking apiserver healthz at https://172.18.206.134:8443/healthz ...
	I0709 11:20:10.445806   11080 api_server.go:279] https://172.18.206.134:8443/healthz returned 200:
	ok
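The "returned 200: ok" line above reflects the usual Kubernetes health convention: `/healthz` is considered healthy when it answers HTTP 200 with body `ok`. A minimal sketch of that check (hypothetical helper names, not minikube's actual function):

```go
package main

import (
	"fmt"
	"strings"
)

// isHealthy mirrors the convention behind the "returned 200: ok" log line:
// the API server is treated as healthy when /healthz answers HTTP 200 with
// a body of "ok" (possibly followed by a newline).
func isHealthy(statusCode int, body string) bool {
	return statusCode == 200 && strings.TrimSpace(body) == "ok"
}

func main() {
	fmt.Println(isHealthy(200, "ok\n")) // healthy
	fmt.Println(isHealthy(500, "err")) // not healthy
}
```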
	I0709 11:20:10.445926   11080 round_trippers.go:463] GET https://172.18.206.134:8443/version
	I0709 11:20:10.445926   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.445926   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.445926   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.448281   11080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0709 11:20:10.448281   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Audit-Id: 7be21a54-db6a-4318-a5ec-f0cce4ef44ab
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.448489   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.448527   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.448527   11080 round_trippers.go:580]     Content-Length: 263
	I0709 11:20:10.448527   11080 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0709 11:20:10.448527   11080 api_server.go:141] control plane version: v1.30.2
	I0709 11:20:10.448527   11080 api_server.go:131] duration metric: took 12.0534ms to wait for apiserver health ...
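The "control plane version: v1.30.2" line is derived from the `/version` response body shown just above. A sketch of that extraction, assuming a struct mirroring a subset of the response fields (the real type is `version.Info` in k8s.io/apimachinery):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors a subset of the /version response fields seen in the
// log above (major, minor, gitVersion, platform).
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

// controlPlaneVersion extracts gitVersion from a raw /version body; this is
// the value reported as "control plane version" in the log.
func controlPlaneVersion(body []byte) (string, error) {
	var v versionInfo
	if err := json.Unmarshal(body, &v); err != nil {
		return "", err
	}
	return v.GitVersion, nil
}

func main() {
	body := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.2","platform":"linux/amd64"}`)
	ver, err := controlPlaneVersion(body)
	fmt.Println(ver, err)
}
```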
	I0709 11:20:10.448527   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0709 11:20:10.589225   11080 request.go:629] Waited for 140.697ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.589493   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.589493   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.589493   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.594092   11080 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0709 11:20:10.594092   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Audit-Id: 2b8208e7-66c3-407d-a513-81f6367a1a50
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.594092   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.594092   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.594453   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.594453   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.596104   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.598949   11080 system_pods.go:59] 8 kube-system pods found
	I0709 11:20:10.599087   11080 system_pods.go:61] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.599087   11080 system_pods.go:61] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.599087   11080 system_pods.go:74] duration metric: took 150.5589ms to wait for pod list to return data ...
	I0709 11:20:10.599087   11080 default_sa.go:34] waiting for default service account to be created ...
	I0709 11:20:10.792113   11080 request.go:629] Waited for 192.5919ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792224   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/default/serviceaccounts
	I0709 11:20:10.792412   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.792412   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.792412   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.796230   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:10.796230   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.796230   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.796230   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Content-Length: 261
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Audit-Id: bc150d93-fb7c-4582-beac-a89c1e26ce41
	I0709 11:20:10.796858   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.796858   11080 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1dc179c9-669f-4ab7-8a39-5d6fc6670d2d","resourceVersion":"341","creationTimestamp":"2024-07-09T18:19:56Z"}}]}
	I0709 11:20:10.797248   11080 default_sa.go:45] found service account: "default"
	I0709 11:20:10.797329   11080 default_sa.go:55] duration metric: took 198.009ms for default service account to be created ...
	I0709 11:20:10.797329   11080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0709 11:20:10.981424   11080 request.go:629] Waited for 183.8495ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981505   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/namespaces/kube-system/pods
	I0709 11:20:10.981752   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:10.981752   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:10.981752   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:10.987139   11080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0709 11:20:10.987139   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:10.987139   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:10 GMT
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Audit-Id: dc7e70c7-c26f-47bd-af7e-e59f9f0e6a12
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:10.987854   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:10.987854   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:10.990198   11080 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-lzsvc","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"60a93e1a-4a5e-48c2-928c-8fe65dbae368","resourceVersion":"444","creationTimestamp":"2024-07-09T18:19:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-09T18:19:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5e87ef-eebc-4813-9c3b-7c2ee4cd469b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0709 11:20:10.994984   11080 system_pods.go:86] 8 kube-system pods found
	I0709 11:20:10.994984   11080 system_pods.go:89] "coredns-7db6d8ff4d-lzsvc" [60a93e1a-4a5e-48c2-928c-8fe65dbae368] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "etcd-multinode-849000" [d9414b5f-b783-46b5-bd41-e07fbd338491] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kindnet-8ww8c" [368f9f8e-ffb7-4e8d-9b53-a17f8d1f61e1] Running
	I0709 11:20:10.994984   11080 system_pods.go:89] "kube-apiserver-multinode-849000" [185dfcae-7f97-43a4-8ad7-9c2e18ad93f4] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-controller-manager-multinode-849000" [84786301-1bd7-4d77-900b-1130c36259bc] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-proxy-qv64t" [64fd2bca-c117-405b-98c4-db980781839b] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "kube-scheduler-multinode-849000" [03dff506-a8f6-41bd-baac-1ef9ad6892e3] Running
	I0709 11:20:10.995749   11080 system_pods.go:89] "storage-provisioner" [6c7d1b2d-c741-4944-ad9f-17ee3c9f881e] Running
	I0709 11:20:10.995749   11080 system_pods.go:126] duration metric: took 198.4185ms to wait for k8s-apps to be running ...
	I0709 11:20:10.995749   11080 system_svc.go:44] waiting for kubelet service to be running ....
	I0709 11:20:11.006411   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0709 11:20:11.032299   11080 system_svc.go:56] duration metric: took 36.2519ms WaitForService to wait for kubelet
	I0709 11:20:11.032384   11080 kubeadm.go:576] duration metric: took 14.0686983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0709 11:20:11.032449   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0709 11:20:11.185036   11080 request.go:629] Waited for 152.3704ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:463] GET https://172.18.206.134:8443/api/v1/nodes
	I0709 11:20:11.185036   11080 round_trippers.go:469] Request Headers:
	I0709 11:20:11.185036   11080 round_trippers.go:473]     Accept: application/json, */*
	I0709 11:20:11.185036   11080 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0709 11:20:11.188676   11080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0709 11:20:11.188676   11080 round_trippers.go:577] Response Headers:
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Cache-Control: no-cache, private
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Content-Type: application/json
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ffd0f8b4-9df2-4b08-ad7c-a1b6570e0c9d
	I0709 11:20:11.189611   11080 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c98adc73-7fe5-466b-a547-878ac71f95a5
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Date: Tue, 09 Jul 2024 18:20:11 GMT
	I0709 11:20:11.189611   11080 round_trippers.go:580]     Audit-Id: de445958-d4f3-421b-bce6-7208e043ef68
	I0709 11:20:11.189854   11080 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-849000","uid":"4f334ff6-9d65-4542-82b7-4f0e667affd2","resourceVersion":"428","creationTimestamp":"2024-07-09T18:19:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-849000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"735571997edb61950a92942d429109b921865fd8","minikube.k8s.io/name":"multinode-849000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_09T11_19_44_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0709 11:20:11.190610   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0709 11:20:11.190610   11080 node_conditions.go:123] node cpu capacity is 2
	I0709 11:20:11.190610   11080 node_conditions.go:105] duration metric: took 158.1605ms to run NodePressure ...
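The "ephemeral capacity is 17734596Ki" line reports a Kubernetes quantity in binary (Ki) units. A sketch of converting such a value to bytes (hypothetical helper; real quantity parsing is done by `resource.Quantity` in k8s.io/apimachinery and handles the full suffix set):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseKi converts a "<n>Ki" quantity string, as reported for the node's
// ephemeral storage capacity above, into bytes (1 Ki = 1024 bytes).
// Only the Ki suffix is handled in this sketch.
func parseKi(s string) (int64, error) {
	n, err := strconv.ParseInt(strings.TrimSuffix(s, "Ki"), 10, 64)
	if err != nil {
		return 0, err
	}
	return n * 1024, nil
}

func main() {
	bytes, err := parseKi("17734596Ki")
	fmt.Println(bytes, err) // capacity from the log, in bytes
}
```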
	I0709 11:20:11.190610   11080 start.go:240] waiting for startup goroutines ...
	I0709 11:20:11.190610   11080 start.go:245] waiting for cluster config update ...
	I0709 11:20:11.190610   11080 start.go:254] writing updated cluster config ...
	I0709 11:20:11.194395   11080 out.go:177] 
	I0709 11:20:11.197726   11080 config.go:182] Loaded profile config "ha-400600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:20:11.205140   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.210868   11080 out.go:177] * Starting "multinode-849000-m02" worker node in "multinode-849000" cluster
	I0709 11:20:11.213536   11080 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 11:20:11.214479   11080 cache.go:56] Caching tarball of preloaded images
	I0709 11:20:11.214815   11080 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 11:20:11.215058   11080 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 11:20:11.215282   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:20:11.219596   11080 start.go:360] acquireMachinesLock for multinode-849000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 11:20:11.219782   11080 start.go:364] duration metric: took 159µs to acquireMachinesLock for "multinode-849000-m02"
	I0709 11:20:11.219811   11080 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.206.134 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0709 11:20:11.219811   11080 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0709 11:20:11.223353   11080 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0709 11:20:11.223353   11080 start.go:159] libmachine.API.Create for "multinode-849000" (driver="hyperv")
	I0709 11:20:11.223353   11080 client.go:168] LocalClient.Create starting
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224120   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224657   11080 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Decoding PEM data...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: Parsing certificate...
	I0709 11:20:11.224899   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 11:20:13.151358   11080 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 11:20:13.151782   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:13.151847   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 11:20:14.883405   11080 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 11:20:14.883642   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:14.883703   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:16.387624   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:20.077547   11080 main.go:141] libmachine: [stderr =====>] : 
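	The switch-discovery step above has PowerShell emit the matching Hyper-V switches as a JSON array, which the driver then decodes. A minimal Go sketch of decoding that output follows; the `vmSwitch` type and `parseSwitches` helper are illustrative names, not minikube's actual types:

	```go
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// vmSwitch mirrors the three properties selected by the PowerShell query
	// above. The type and field layout are assumptions for illustration.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
	}

	// parseSwitches decodes the JSON array that ConvertTo-Json emits.
	func parseSwitches(raw string) ([]vmSwitch, error) {
		var switches []vmSwitch
		err := json.Unmarshal([]byte(raw), &switches)
		return switches, err
	}

	func main() {
		// The stdout captured in the log above.
		raw := `[{"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444", "Name": "Default Switch", "SwitchType": 1}]`
		switches, err := parseSwitches(raw)
		if err != nil {
			panic(err)
		}
		fmt.Println(switches[0].Name) // prints "Default Switch"
	}
	```

	Note the query matches either external switches or the well-known "Default Switch" GUID, which is why an Internal-type (SwitchType 1) switch is accepted here.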
	I0709 11:20:20.080459   11080 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 11:20:20.573750   11080 main.go:141] libmachine: Creating SSH key...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: Creating VM...
	I0709 11:20:20.673146   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 11:20:23.656383   11080 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 11:20:23.657490   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:23.657490   11080 main.go:141] libmachine: Using switch "Default Switch"
	I0709 11:20:23.657579   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 11:20:25.447051   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:25.447625   11080 main.go:141] libmachine: Creating VHD
	I0709 11:20:25.447625   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5E53C6D0-5109-4D35-B1EC-1393270CA44B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 11:20:29.273905   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing magic tar header
	I0709 11:20:29.273905   11080 main.go:141] libmachine: Writing SSH key tar header
	I0709 11:20:29.284763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 11:20:32.544147   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:32.544825   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:32.544942   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd' -SizeBytes 20000MB
	I0709 11:20:35.179825   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [stderr =====>] : 
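	The VHD sequence above is worth unpacking: the driver creates a tiny 10MB fixed VHD, writes a raw tar stream (the "magic tar header" and SSH key entries) directly into its data area so the boot2docker guest can pick up the key at first boot, then converts the disk to dynamic and resizes it to the requested 20000MB. A Go sketch of the key-packing step, under the assumption that the key is stored under a `.ssh/id_rsa` entry name (illustrative, not minikube's exact layout):

	```go
	package main

	import (
		"archive/tar"
		"bytes"
		"fmt"
	)

	// buildKeyTar packs an SSH private key into an in-memory tar stream,
	// similar in spirit to the "Writing SSH key tar header" step above.
	func buildKeyTar(key []byte) ([]byte, error) {
		var buf bytes.Buffer
		tw := tar.NewWriter(&buf)
		hdr := &tar.Header{Name: ".ssh/id_rsa", Mode: 0600, Size: int64(len(key))}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(key); err != nil {
			return nil, err
		}
		// Close flushes the trailing end-of-archive blocks.
		if err := tw.Close(); err != nil {
			return nil, err
		}
		return buf.Bytes(), nil
	}

	func main() {
		data, err := buildKeyTar([]byte("-----BEGIN RSA PRIVATE KEY-----\n..."))
		if err != nil {
			panic(err)
		}
		fmt.Printf("tar stream: %d bytes\n", len(data))
	}
	```

	Starting from a fixed-size VHD matters because its data area begins at a known offset, so the raw tar bytes land where the guest expects them before the Convert-VHD/Resize-VHD steps reshape the disk.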
	I0709 11:20:35.180360   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-849000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:38.909496   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-849000-m02 -DynamicMemoryEnabled $false
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:41.254998   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-849000-m02 -Count 2
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:43.473410   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:43.474205   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\boot2docker.iso'
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:46.096786   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:46.097188   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-849000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\disk.vhd'
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:49.141245   11080 main.go:141] libmachine: Starting VM...
	I0709 11:20:49.141353   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-849000-m02
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:52.444134   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:52.444588   11080 main.go:141] libmachine: Waiting for host to start...
	I0709 11:20:52.444802   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:20:54.848247   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:54.848352   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:20:57.488165   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:20:57.488298   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:20:58.493459   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:00.760138   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:00.761195   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:03.353161   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:03.353743   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:04.368700   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:06.644937   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:06.645938   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:09.179018   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:10.193913   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:12.497612   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stdout =====>] : 
	I0709 11:21:15.079942   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:16.096106   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:18.442305   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:18.442661   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:21.066016   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:23.278705   11080 main.go:141] libmachine: [stderr =====>] : 
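	The "Waiting for host to start..." stretch above is a simple poll loop: query the VM state, query the first adapter's first IP address, and retry with a short sleep until an address appears (four empty polls here before 172.18.205.211 shows up). A self-contained Go sketch of that pattern, with a fake query function standing in for the PowerShell call:

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls getIP until it returns a non-empty address or the
	// attempts are exhausted, mirroring the retry loop in the log above.
	func waitForIP(getIP func() (string, error), attempts int, delay time.Duration) (string, error) {
		for i := 0; i < attempts; i++ {
			ip, err := getIP()
			if err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(delay)
		}
		return "", errors.New("timed out waiting for an IP address")
	}

	func main() {
		calls := 0
		// Fake query: the adapter reports no address for the first few
		// polls, just as in the log, then returns one.
		fake := func() (string, error) {
			calls++
			if calls < 4 {
				return "", nil
			}
			return "172.18.205.211", nil
		}
		ip, err := waitForIP(fake, 10, time.Millisecond)
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // prints "172.18.205.211"
	}
	```

	The real driver also bounds the whole wait with the 6m0s StartHostTimeout from the cluster config; the fixed attempt count here is a simplification.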
	I0709 11:21:23.279312   11080 machine.go:94] provisionDockerMachine start ...
	I0709 11:21:23.279415   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:25.559526   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:25.560574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:25.560679   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:28.232227   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:28.233232   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:28.238921   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:28.250822   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:28.250822   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 11:21:28.388458   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0709 11:21:28.388571   11080 buildroot.go:166] provisioning hostname "multinode-849000-m02"
	I0709 11:21:28.388571   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:30.617630   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:30.618011   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:33.212355   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:33.212671   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:33.219551   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:33.220082   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:33.220082   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-849000-m02 && echo "multinode-849000-m02" | sudo tee /etc/hostname
	I0709 11:21:33.391210   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-849000-m02
	
	I0709 11:21:33.391343   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:35.578362   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:35.578543   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:38.185507   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:38.191886   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:38.192615   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:38.192615   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-849000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-849000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-849000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 11:21:38.341565   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 11:21:38.341639   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 11:21:38.341639   11080 buildroot.go:174] setting up certificates
	I0709 11:21:38.341639   11080 provision.go:84] configureAuth start
	I0709 11:21:38.341639   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:40.516763   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:43.076385   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:43.076717   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:45.280910   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:45.281082   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:45.281156   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:47.878898   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:47.878898   11080 provision.go:143] copyHostCerts
	I0709 11:21:47.879605   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0709 11:21:47.880180   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 11:21:47.880180   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 11:21:47.880971   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 11:21:47.882540   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0709 11:21:47.883125   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 11:21:47.883125   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 11:21:47.883679   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 11:21:47.885058   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0709 11:21:47.885436   11080 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 11:21:47.885557   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 11:21:47.886134   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 11:21:47.887498   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-849000-m02 san=[127.0.0.1 172.18.205.211 localhost minikube multinode-849000-m02]
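	The server-cert line above issues a certificate whose SANs cover every name and address the node may be reached by (127.0.0.1, the VM IP, localhost, minikube, and the node name). The real provisioner signs with the minikube CA key pair named in the log; the sketch below uses a self-signed certificate purely to keep the example short, and the organization string is copied from the log for flavor:

	```go
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// selfSignedServerCert issues a TLS server certificate carrying the given
	// SANs. Self-signed here for brevity; minikube signs with its CA instead.
	func selfSignedServerCert(dnsNames []string, ips []net.IP) (*x509.Certificate, error) {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-849000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			DNSNames:     dnsNames,
			IPAddresses:  ips,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, err
		}
		return x509.ParseCertificate(der)
	}

	func main() {
		cert, err := selfSignedServerCert(
			[]string{"localhost", "minikube", "multinode-849000-m02"},
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.18.205.211")},
		)
		if err != nil {
			panic(err)
		}
		fmt.Println(cert.DNSNames)
	}
	```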
	I0709 11:21:48.001674   11080 provision.go:177] copyRemoteCerts
	I0709 11:21:48.013068   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 11:21:48.014084   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:50.250018   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:50.250215   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:50.250314   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:52.836979   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:52.837914   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:52.838808   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:21:52.940691   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9274594s)
	I0709 11:21:52.940691   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0709 11:21:52.941438   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0709 11:21:52.990054   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0709 11:21:52.990054   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 11:21:53.038708   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0709 11:21:53.039254   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0709 11:21:53.086100   11080 provision.go:87] duration metric: took 14.7444116s to configureAuth
	I0709 11:21:53.086158   11080 buildroot.go:189] setting minikube options for container-runtime
	I0709 11:21:53.086860   11080 config.go:182] Loaded profile config "multinode-849000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 11:21:53.086990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:21:55.350257   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:21:55.351179   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:55.351218   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:21:57.991084   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:21:57.996542   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:21:57.997434   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:21:57.997434   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 11:21:58.134576   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 11:21:58.134576   11080 buildroot.go:70] root file system type: tmpfs
	I0709 11:21:58.135124   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 11:21:58.135124   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:00.283090   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:00.284070   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:00.284213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:02.866133   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:02.866377   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:02.871379   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:02.872132   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:02.872132   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.206.134"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 11:22:03.038743   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.206.134
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
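The unit file above relies on systemd's drop-in override semantics that its own comments describe: the bare `ExecStart=` clears the command inherited from the base unit, so the following `ExecStart=` becomes the only command. A minimal sketch simulating that reset rule (the commands and paths are illustrative, not the real units):

```shell
# Simulate systemd's "ExecStart=" reset semantics from the unit comments above:
# an empty "ExecStart=" clears anything inherited; the next non-empty one wins.
unit=$(mktemp)
cat > "$unit" <<'EOF'
ExecStart=/usr/bin/dockerd-from-base-unit
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF

# Keep only the assignment after the last reset line, as systemd would.
effective=$(awk '/^ExecStart=$/ {cmd=""} /^ExecStart=./ {cmd=substr($0,11)} END {print cmd}' "$unit")
echo "effective ExecStart: $effective"
rm -f "$unit"
```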
	
	I0709 11:22:03.038743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:05.225005   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:05.225105   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:07.810075   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:07.815935   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:07.816766   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:07.816766   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 11:22:10.033737   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
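The `diff -u ... || { mv ...; }` command above is an idempotent-update idiom: the unit is only replaced (and docker reloaded and restarted) when the rendered file differs from what is on disk, or, as here on a freshly provisioned VM, when the file does not exist yet, which is why `diff` reports "can't stat". A small sketch of the same pattern on throwaway files:

```shell
# Replace a config file only when the new rendition differs (or the file is
# missing, as on a fresh VM). Paths are illustrative temp files.
current=$(mktemp -u)    # -u: path only; the file deliberately does not exist
candidate=$(mktemp)
echo "ExecStart=/usr/bin/dockerd" > "$candidate"

if ! diff -u "$current" "$candidate" >/dev/null 2>&1; then
    # In the real flow this branch also runs: systemctl daemon-reload,
    # enable docker, and restart docker.
    mv "$candidate" "$current"
    echo "replaced"
else
    echo "unchanged"
fi
```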
	
	I0709 11:22:10.033805   11080 machine.go:97] duration metric: took 46.7543344s to provisionDockerMachine
	I0709 11:22:10.033805   11080 client.go:171] duration metric: took 1m58.8100611s to LocalClient.Create
	I0709 11:22:10.033904   11080 start.go:167] duration metric: took 1m58.81016s to libmachine.API.Create "multinode-849000"
	I0709 11:22:10.033904   11080 start.go:293] postStartSetup for "multinode-849000-m02" (driver="hyperv")
	I0709 11:22:10.033904   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 11:22:10.049483   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 11:22:10.049483   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:12.196574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:12.196759   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:14.773966   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:14.774211   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:14.774388   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:14.880469   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8308404s)
	I0709 11:22:14.893820   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 11:22:14.900205   11080 command_runner.go:130] > NAME=Buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0709 11:22:14.900586   11080 command_runner.go:130] > ID=buildroot
	I0709 11:22:14.900586   11080 command_runner.go:130] > VERSION_ID=2023.02.9
	I0709 11:22:14.900586   11080 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0709 11:22:14.900878   11080 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 11:22:14.900958   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 11:22:14.901694   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 11:22:14.902949   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 11:22:14.903007   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> /etc/ssl/certs/150322.pem
	I0709 11:22:14.914648   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 11:22:14.931988   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 11:22:14.976672   11080 start.go:296] duration metric: took 4.9427507s for postStartSetup
	I0709 11:22:14.980296   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:17.149588   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:17.149683   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:19.730750   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:19.731744   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-849000\config.json ...
	I0709 11:22:19.734373   11080 start.go:128] duration metric: took 2m8.5141378s to createHost
	I0709 11:22:19.734498   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:21.884569   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:21.885475   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:21.885570   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:24.454687   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:24.462310   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:24.462866   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:24.462866   11080 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 11:22:24.602515   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720549344.609926885
	
	I0709 11:22:24.602629   11080 fix.go:216] guest clock: 1720549344.609926885
	I0709 11:22:24.602629   11080 fix.go:229] Guest: 2024-07-09 11:22:24.609926885 -0700 PDT Remote: 2024-07-09 11:22:19.7344985 -0700 PDT m=+344.108245701 (delta=4.875428385s)
	I0709 11:22:24.602743   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:26.788501   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:26.789399   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:29.316158   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:29.322797   11080 main.go:141] libmachine: Using SSH client type: native
	I0709 11:22:29.323325   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.205.211 22 <nil> <nil>}
	I0709 11:22:29.323492   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720549344
	I0709 11:22:29.467864   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 18:22:24 UTC 2024
	
	I0709 11:22:29.467922   11080 fix.go:236] clock set: Tue Jul  9 18:22:24 UTC 2024
	 (err=<nil>)
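The clock fix above reads the guest's epoch time over SSH, computes the drift against the host (4.875s here), and then sets the guest clock with `sudo date -s @<epoch>`. A sketch of the arithmetic, using the epoch value from the log (the host timestamp is an illustrative stand-in):

```shell
# Guest/host clock drift check, as in the fix.go lines above.
guest_epoch=1720549344           # from the guest's clock in the log
host_epoch=1720549339            # stand-in for the host clock at that moment
delta=$((guest_epoch - host_epoch))
echo "drift: ${delta}s"
# The real flow then runs over SSH: sudo date -s @<epoch>
date -u -d "@${guest_epoch}" '+%a %b %e %H:%M:%S UTC %Y'
```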
	I0709 11:22:29.467976   11080 start.go:83] releasing machines lock for "multinode-849000-m02", held for 2m18.2477075s
	I0709 11:22:29.468213   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:31.622432   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:31.623654   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:31.623715   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:34.179998   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:34.183731   11080 out.go:177] * Found network options:
	I0709 11:22:34.186860   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.188920   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.191174   11080 out.go:177]   - NO_PROXY=172.18.206.134
	W0709 11:22:34.194227   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	W0709 11:22:34.195301   11080 proxy.go:119] fail to check proxy env: Error ip not in block
	I0709 11:22:34.198398   11080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 11:22:34.198526   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:34.208413   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0709 11:22:34.209355   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-849000-m02 ).state
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.473806   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:36.474776   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:36.474885   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-849000-m02 ).networkadapters[0]).ipaddresses[0]
	I0709 11:22:39.120904   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.121123   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.121331   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stdout =====>] : 172.18.205.211
	
	I0709 11:22:39.150109   11080 main.go:141] libmachine: [stderr =====>] : 
	I0709 11:22:39.150109   11080 sshutil.go:53] new ssh client: &{IP:172.18.205.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-849000-m02\id_rsa Username:docker}
	I0709 11:22:39.214930   11080 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0709 11:22:39.216101   11080 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0076706s)
	W0709 11:22:39.216101   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 11:22:39.228355   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 11:22:39.361349   11080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0709 11:22:39.361418   11080 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0709 11:22:39.361418   11080 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1630028s)
	I0709 11:22:39.361567   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 11:22:39.361605   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:39.361773   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:39.395534   11080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
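The `%!s(MISSING)` in the command above is a Go `fmt` verb whose argument was dropped by the logger; the command actually executed writes the runtime-endpoint line to /etc/crictl.yaml. A sketch against a temp file instead of the real path:

```shell
# Write a crictl config pointing at the containerd socket (temp path for the
# sketch; the real target is /etc/crictl.yaml via sudo tee).
crictl_cfg=$(mktemp)
printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' | tee "$crictl_cfg"
```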
	I0709 11:22:39.411076   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 11:22:39.440578   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 11:22:39.459507   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 11:22:39.472271   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 11:22:39.503478   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.535129   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 11:22:39.565594   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 11:22:39.596645   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 11:22:39.626303   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 11:22:39.657871   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 11:22:39.687857   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
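The `sed` runs above rewrite /etc/containerd/config.toml in place: pin the pause image, force `SystemdCgroup = false` (the "cgroupfs" driver the log mentions), normalize the runc runtime type, and re-enable unprivileged ports. Two of those edits applied to a throwaway config (the sample contents are a minimal stand-in for the real file):

```shell
# Apply the SystemdCgroup and sandbox_image edits from the log to a sample
# config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
EOF

sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
grep -E 'SystemdCgroup|sandbox_image' "$cfg"
```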
	I0709 11:22:39.718726   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 11:22:39.737354   11080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0709 11:22:39.750092   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 11:22:39.780554   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:39.961136   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 11:22:40.003477   11080 start.go:494] detecting cgroup driver to use...
	I0709 11:22:40.015211   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 11:22:40.037706   11080 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0709 11:22:40.037931   11080 command_runner.go:130] > [Unit]
	I0709 11:22:40.037931   11080 command_runner.go:130] > Description=Docker Application Container Engine
	I0709 11:22:40.037931   11080 command_runner.go:130] > Documentation=https://docs.docker.com
	I0709 11:22:40.037931   11080 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0709 11:22:40.037931   11080 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitBurst=3
	I0709 11:22:40.037996   11080 command_runner.go:130] > StartLimitIntervalSec=60
	I0709 11:22:40.037996   11080 command_runner.go:130] > [Service]
	I0709 11:22:40.037996   11080 command_runner.go:130] > Type=notify
	I0709 11:22:40.037996   11080 command_runner.go:130] > Restart=on-failure
	I0709 11:22:40.037996   11080 command_runner.go:130] > Environment=NO_PROXY=172.18.206.134
	I0709 11:22:40.037996   11080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0709 11:22:40.037996   11080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0709 11:22:40.038089   11080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0709 11:22:40.038089   11080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0709 11:22:40.038089   11080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0709 11:22:40.038089   11080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0709 11:22:40.038089   11080 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0709 11:22:40.038158   11080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0709 11:22:40.038158   11080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0709 11:22:40.038158   11080 command_runner.go:130] > ExecStart=
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0709 11:22:40.038260   11080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0709 11:22:40.038260   11080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0709 11:22:40.038260   11080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0709 11:22:40.038260   11080 command_runner.go:130] > LimitNOFILE=infinity
	I0709 11:22:40.038323   11080 command_runner.go:130] > LimitNPROC=infinity
	I0709 11:22:40.038430   11080 command_runner.go:130] > LimitCORE=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0709 11:22:40.038469   11080 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0709 11:22:40.038469   11080 command_runner.go:130] > TasksMax=infinity
	I0709 11:22:40.038469   11080 command_runner.go:130] > TimeoutStartSec=0
	I0709 11:22:40.038532   11080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0709 11:22:40.038566   11080 command_runner.go:130] > Delegate=yes
	I0709 11:22:40.038566   11080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0709 11:22:40.038566   11080 command_runner.go:130] > KillMode=process
	I0709 11:22:40.038566   11080 command_runner.go:130] > [Install]
	I0709 11:22:40.038609   11080 command_runner.go:130] > WantedBy=multi-user.target
	I0709 11:22:40.055979   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.091794   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 11:22:40.154011   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 11:22:40.190664   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.226820   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0709 11:22:40.287595   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 11:22:40.308575   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 11:22:40.342070   11080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0709 11:22:40.354449   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0709 11:22:40.359803   11080 command_runner.go:130] > /usr/bin/cri-dockerd
	I0709 11:22:40.371212   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 11:22:40.388323   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 11:22:40.433437   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 11:22:40.633922   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 11:22:40.820826   11080 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 11:22:40.820826   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
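The 130-byte /etc/docker/daemon.json copied above selects the "cgroupfs" cgroup driver. The exact payload is not echoed in the log; a typical configuration for that purpose has the following shape (assumed, not the verbatim file):

```shell
# Assumed shape of a cgroupfs daemon.json; the real 130-byte payload is not
# shown in the log.
dj=$(mktemp)
cat > "$dj" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
EOF
# The flow then runs: sudo systemctl daemon-reload && sudo systemctl restart docker
grep cgroupdriver "$dj"
```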
	I0709 11:22:40.864181   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 11:22:41.057366   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 11:23:42.172852   11080 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0709 11:23:42.172852   11080 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0709 11:23:42.173160   11080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1155866s)
	I0709 11:23:42.185419   11080 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.209631   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	I0709 11:23:42.209703   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	I0709 11:23:42.209762   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0709 11:23:42.209850   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.209899   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.209973   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210054   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210114   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0709 11:23:42.210164   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210722   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210761   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210832   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0709 11:23:42.210900   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.210951   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211024   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211264   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211313   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211380   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211471   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211538   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211574   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0709 11:23:42.211639   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0709 11:23:42.211695   11080 command_runner.go:130] > Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0709 11:23:42.221589   11080 out.go:177] 
	W0709 11:23:42.223827   11080 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 18:22:08 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.410461599Z" level=info msg="Starting up"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.411656933Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 18:22:08 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:08.412688463Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.447732859Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473329186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473455890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473609794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473629495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473705097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.473795699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474058107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474219311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474241412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474255012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474371016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.474743926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478234425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478386330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478567135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478715739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478822242Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.478999847Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504493672Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504558174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504581174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504597375Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504611075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.504735979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505042687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505276694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505301095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505314195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505327696Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505340396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505352596Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505365397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505377897Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505391497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505404098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505415098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505433699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505446399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505457899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505470000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505481900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505501900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505518301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505531101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505543502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505557002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505568302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505579203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505594003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505612104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505631504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505643504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505653505Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505724407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505745807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505942813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505979914Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.505990214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506001915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506069517Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506443527Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506514929Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506562431Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 18:22:08 multinode-849000-m02 dockerd[666]: time="2024-07-09T18:22:08.506600732Z" level=info msg="containerd successfully booted in 0.060314s"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.482479698Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.519851394Z" level=info msg="Loading containers: start."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.679586850Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.907053639Z" level=info msg="Loading containers: done."
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930759646Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 18:22:09 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:09.930946052Z" level=info msg="Daemon has completed initialization"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.040876759Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 18:22:10 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:10.041037362Z" level=info msg="API listen on [::]:2376"
	Jul 09 18:22:10 multinode-849000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.091449660Z" level=info msg="Processing signal 'terminated'"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093232766Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093660567Z" level=info msg="Daemon shutdown complete"
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093747968Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 18:22:41 multinode-849000-m02 dockerd[660]: time="2024-07-09T18:22:41.093768468Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 18:22:41 multinode-849000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 18:22:42 multinode-849000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 18:22:42 multinode-849000-m02 dockerd[1068]: time="2024-07-09T18:22:42.156038351Z" level=info msg="Starting up"
	Jul 09 18:23:42 multinode-849000-m02 dockerd[1068]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 18:23:42 multinode-849000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0709 11:23:42.223827   11080 out.go:239] * 
	W0709 11:23:42.225718   11080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0709 11:23:42.228228   11080 out.go:177] 
	
	
	==> Docker <==
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597835991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597891091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597905791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597983991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.597776491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06d8c6b21616c767448c4be98bae932ed2b404a3dadcf2b2b4b157e8bcf347ea/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:20:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a33ce3348449c0faec48fb58b4574718ba6b78d837824e60579921c71f06d76/resolv.conf as [nameserver 172.18.192.1]"
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968184436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968452735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968474235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:08 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:08.968801835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.141801596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.142933705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.143853812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:20:09 multinode-849000 dockerd[1440]: time="2024-07-09T18:20:09.144140014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904534514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904809014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904875715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:17 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:17.904980715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:18 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/216d18e70c2fb87f116d16247afca62184ce070d4aca7bbce19d833808db917c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 09 18:24:19 multinode-849000 cri-dockerd[1330]: time="2024-07-09T18:24:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285320124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285707025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285773326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 18:24:19 multinode-849000 dockerd[1440]: time="2024-07-09T18:24:19.285917526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c7a0fcb9e869e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Running             busybox                   0                   216d18e70c2fb       busybox-fc5497c4f-f2j8m
	c150592e658c3       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   2a33ce3348449       coredns-7db6d8ff4d-lzsvc
	37c7b8e14dc9c       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   06d8c6b21616c       storage-provisioner
	f3de6fb5f7f77       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              27 minutes ago      Running             kindnet-cni               0                   668c809456776       kindnet-8ww8c
	02ab9d1727686       53c535741fb44                                                                                         27 minutes ago      Running             kube-proxy                0                   0a60f24294838       kube-proxy-qv64t
	0272c56037c7d       3861cfcd7c04c                                                                                         28 minutes ago      Running             etcd                      0                   2c574be2cc6d3       etcd-multinode-849000
	8661e349d48ab       7820c83aa1394                                                                                         28 minutes ago      Running             kube-scheduler            0                   b9412aa955ab7       kube-scheduler-multinode-849000
	a89ee753e7759       e874818b3caac                                                                                         28 minutes ago      Running             kube-controller-manager   0                   a610e3d24fa06       kube-controller-manager-multinode-849000
	556077ae2825d       56ce0fd9fb532                                                                                         28 minutes ago      Running             kube-apiserver            0                   2035bb8593f0e       kube-apiserver-multinode-849000
	
	
	==> coredns [c150592e658c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = eabdad51eef6fc649fa850c178ba451366b41048c1c621a6be25e706245d9103e597e4866d961c875c300d6a366ff9db50ab3afe55608b789039c53007846ed6
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54651 - 41351 "HINFO IN 6752767091270397564.1917026836058955763. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104932825s
	[INFO] 10.244.0.3:37665 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218301s
	[INFO] 10.244.0.3:33292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.095768808s
	[INFO] 10.244.0.3:51028 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033779908s
	[INFO] 10.244.0.3:52198 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.254317433s
	[INFO] 10.244.0.3:58685 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001442s
	[INFO] 10.244.0.3:50205 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.085049073s
	[INFO] 10.244.0.3:41462 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002117s
	[INFO] 10.244.0.3:46161 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002965s
	[INFO] 10.244.0.3:40010 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.038270523s
	[INFO] 10.244.0.3:50213 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181901s
	[INFO] 10.244.0.3:40333 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208801s
	[INFO] 10.244.0.3:33479 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001618s
	[INFO] 10.244.0.3:44590 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223001s
	[INFO] 10.244.0.3:58378 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001694s
	[INFO] 10.244.0.3:35676 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.0.3:50088 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126901s
	[INFO] 10.244.0.3:60351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000289801s
	[INFO] 10.244.0.3:33623 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000197201s
	[INFO] 10.244.0.3:60126 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001055s
	[INFO] 10.244.0.3:44284 - 5 "PTR IN 1.192.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000150901s
	
	
	==> describe nodes <==
	Name:               multinode-849000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-849000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=multinode-849000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_09T11_19_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 18:19:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-849000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 18:47:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jul 2024 18:45:12 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jul 2024 18:45:12 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jul 2024 18:45:12 +0000   Tue, 09 Jul 2024 18:19:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jul 2024 18:45:12 +0000   Tue, 09 Jul 2024 18:20:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.206.134
	  Hostname:    multinode-849000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 af90c209c8a84d288c2d79663fa33a94
	  System UUID:                69e31ac5-0527-9e4a-81b6-556c6bac7963
	  Boot ID:                    5c1387e9-724e-4a1c-a3cc-dde77e8449e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-f2j8m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-lzsvc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-849000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-8ww8c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-849000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-multinode-849000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-qv64t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-849000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node multinode-849000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node multinode-849000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node multinode-849000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m                node-controller  Node multinode-849000 event: Registered Node multinode-849000 in Controller
	  Normal  NodeReady                27m                kubelet          Node multinode-849000 status is now: NodeReady
	
	
	Name:               multinode-849000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-849000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=735571997edb61950a92942d429109b921865fd8
	                    minikube.k8s.io/name=multinode-849000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_09T11_40_09_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jul 2024 18:40:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-849000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jul 2024 18:43:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:43:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:43:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:43:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 09 Jul 2024 18:40:39 +0000   Tue, 09 Jul 2024 18:43:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.18.196.236
	  Hostname:    multinode-849000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 30665cda6be840e19de2d42101ee89bb
	  System UUID:                ddf7b545-8cfa-674d-b55f-fd48f2f9d4f5
	  Boot ID:                    c8391cc6-6aee-4957-ada5-1a481b0a3745
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hjks    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-sn4kd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m31s
	  kube-system                 kube-proxy-wdskl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m31s (x2 over 7m31s)  kubelet          Node multinode-849000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s (x2 over 7m31s)  kubelet          Node multinode-849000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m31s (x2 over 7m31s)  kubelet          Node multinode-849000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m28s                  node-controller  Node multinode-849000-m03 event: Registered Node multinode-849000-m03 in Controller
	  Normal  NodeReady                7m7s                   kubelet          Node multinode-849000-m03 status is now: NodeReady
	  Normal  NodeNotReady             3m43s                  node-controller  Node multinode-849000-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +7.061894] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul 9 18:18] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.172355] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[Jul 9 18:19] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.106297] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.542997] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.194600] systemd-fstab-generator[1056]: Ignoring "noauto" option for root device
	[  +0.225984] systemd-fstab-generator[1070]: Ignoring "noauto" option for root device
	[  +2.819794] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.174764] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.183052] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.284648] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[ +10.989764] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
	[  +0.110491] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.025456] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +6.572905] systemd-fstab-generator[1875]: Ignoring "noauto" option for root device
	[  +0.100801] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.070675] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.120083] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.551679] systemd-fstab-generator[2475]: Ignoring "noauto" option for root device
	[  +0.193907] kauditd_printk_skb: 12 callbacks suppressed
	[Jul 9 18:20] kauditd_printk_skb: 51 callbacks suppressed
	[Jul 9 18:24] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [0272c56037c7] <==
	{"level":"info","ts":"2024-07-09T18:29:37.900644Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2108544045,"revision":687,"compact-revision":-1}
	{"level":"info","ts":"2024-07-09T18:34:37.903933Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-07-09T18:34:37.912189Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":927,"took":"7.652225ms","hash":1821337612,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-09T18:34:37.912513Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1821337612,"revision":927,"compact-revision":687}
	{"level":"info","ts":"2024-07-09T18:35:57.287138Z","caller":"traceutil/trace.go:171","msg":"trace[1176997031] linearizableReadLoop","detail":"{readStateIndex:1442; appliedIndex:1441; }","duration":"158.59851ms","start":"2024-07-09T18:35:57.12852Z","end":"2024-07-09T18:35:57.287118Z","steps":["trace[1176997031] 'read index received'  (duration: 137.916144ms)","trace[1176997031] 'applied index is now lower than readState.Index'  (duration: 20.680866ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-09T18:35:57.287544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.000512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-4hjks\" ","response":"range_response_count:1 size:2221"}
	{"level":"info","ts":"2024-07-09T18:35:57.287811Z","caller":"traceutil/trace.go:171","msg":"trace[632773735] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-4hjks; range_end:; response_count:1; response_revision:1233; }","duration":"159.270012ms","start":"2024-07-09T18:35:57.128515Z","end":"2024-07-09T18:35:57.287785Z","steps":["trace[632773735] 'agreement among raft nodes before linearized reading'  (duration: 158.812611ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:37:35.826214Z","caller":"traceutil/trace.go:171","msg":"trace[478726099] transaction","detail":"{read_only:false; response_revision:1311; number_of_response:1; }","duration":"158.19521ms","start":"2024-07-09T18:37:35.667982Z","end":"2024-07-09T18:37:35.826177Z","steps":["trace[478726099] 'process raft request'  (duration: 158.074409ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:39:37.921147Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1168}
	{"level":"info","ts":"2024-07-09T18:39:37.929404Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1168,"took":"7.948126ms","hash":3253994334,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-09T18:39:37.929571Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3253994334,"revision":1168,"compact-revision":927}
	{"level":"info","ts":"2024-07-09T18:40:13.451954Z","caller":"traceutil/trace.go:171","msg":"trace[1502299339] transaction","detail":"{read_only:false; response_revision:1471; number_of_response:1; }","duration":"179.100678ms","start":"2024-07-09T18:40:13.272835Z","end":"2024-07-09T18:40:13.451935Z","steps":["trace[1502299339] 'process raft request'  (duration: 178.950978ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T18:40:14.005634Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.253227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-849000-m03\" ","response":"range_response_count:1 size:2848"}
	{"level":"info","ts":"2024-07-09T18:40:14.005805Z","caller":"traceutil/trace.go:171","msg":"trace[2101599561] range","detail":"{range_begin:/registry/minions/multinode-849000-m03; range_end:; response_count:1; response_revision:1472; }","duration":"132.404128ms","start":"2024-07-09T18:40:13.873328Z","end":"2024-07-09T18:40:14.005732Z","steps":["trace[2101599561] 'range keys from in-memory index tree'  (duration: 131.983226ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:40:19.670021Z","caller":"traceutil/trace.go:171","msg":"trace[1040829640] transaction","detail":"{read_only:false; response_revision:1479; number_of_response:1; }","duration":"173.817261ms","start":"2024-07-09T18:40:19.496184Z","end":"2024-07-09T18:40:19.670001Z","steps":["trace[1040829640] 'process raft request'  (duration: 173.61226ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-09T18:40:21.061754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.020023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-849000-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-07-09T18:40:21.061828Z","caller":"traceutil/trace.go:171","msg":"trace[42653553] range","detail":"{range_begin:/registry/minions/multinode-849000-m03; range_end:; response_count:1; response_revision:1481; }","duration":"193.165323ms","start":"2024-07-09T18:40:20.868649Z","end":"2024-07-09T18:40:21.061814Z","steps":["trace[42653553] 'range keys from in-memory index tree'  (duration: 192.928723ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:43:35.498409Z","caller":"traceutil/trace.go:171","msg":"trace[659964785] transaction","detail":"{read_only:false; response_revision:1679; number_of_response:1; }","duration":"247.171591ms","start":"2024-07-09T18:43:35.251216Z","end":"2024-07-09T18:43:35.498388Z","steps":["trace[659964785] 'process raft request'  (duration: 246.984191ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:43:37.157261Z","caller":"traceutil/trace.go:171","msg":"trace[831135192] transaction","detail":"{read_only:false; response_revision:1680; number_of_response:1; }","duration":"116.848632ms","start":"2024-07-09T18:43:37.040393Z","end":"2024-07-09T18:43:37.157241Z","steps":["trace[831135192] 'process raft request'  (duration: 116.710932ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:43:37.461958Z","caller":"traceutil/trace.go:171","msg":"trace[390708889] linearizableReadLoop","detail":"{readStateIndex:1985; appliedIndex:1984; }","duration":"105.267809ms","start":"2024-07-09T18:43:37.356664Z","end":"2024-07-09T18:43:37.461932Z","steps":["trace[390708889] 'read index received'  (duration: 51.503702ms)","trace[390708889] 'applied index is now lower than readState.Index'  (duration: 53.762307ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-09T18:43:37.462236Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.542211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-09T18:43:37.462318Z","caller":"traceutil/trace.go:171","msg":"trace[1756853946] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1680; }","duration":"105.627011ms","start":"2024-07-09T18:43:37.356635Z","end":"2024-07-09T18:43:37.462262Z","steps":["trace[1756853946] 'agreement among raft nodes before linearized reading'  (duration: 105.37421ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-09T18:44:37.954071Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1408}
	{"level":"info","ts":"2024-07-09T18:44:37.962594Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1408,"took":"7.639517ms","hash":1552300792,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1773568,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-07-09T18:44:37.962695Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1552300792,"revision":1408,"compact-revision":1168}
	
	
	==> kernel <==
	 18:47:40 up 30 min,  0 users,  load average: 0.24, 0.34, 0.35
	Linux multinode-849000 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f3de6fb5f7f7] <==
	I0709 18:46:38.125567       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:46:48.132436       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:46:48.132477       1 main.go:227] handling current node
	I0709 18:46:48.132490       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:46:48.132495       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:46:58.148664       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:46:58.148709       1 main.go:227] handling current node
	I0709 18:46:58.148722       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:46:58.148728       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:47:08.157837       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:47:08.157926       1 main.go:227] handling current node
	I0709 18:47:08.157940       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:47:08.157946       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:47:18.163547       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:47:18.163627       1 main.go:227] handling current node
	I0709 18:47:18.163643       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:47:18.163648       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:47:28.178690       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:47:28.178792       1 main.go:227] handling current node
	I0709 18:47:28.178808       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:47:28.178815       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	I0709 18:47:38.185007       1 main.go:223] Handling node with IPs: map[172.18.206.134:{}]
	I0709 18:47:38.185168       1 main.go:227] handling current node
	I0709 18:47:38.185182       1 main.go:223] Handling node with IPs: map[172.18.196.236:{}]
	I0709 18:47:38.185189       1 main.go:250] Node multinode-849000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [556077ae2825] <==
	I0709 18:19:39.638553       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0709 18:19:39.698240       1 shared_informer.go:320] Caches are synced for configmaps
	I0709 18:19:39.700011       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0709 18:19:39.702635       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0709 18:19:39.714433       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0709 18:19:40.505081       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0709 18:19:40.517142       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0709 18:19:40.517305       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0709 18:19:41.636583       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0709 18:19:41.706223       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0709 18:19:41.808149       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0709 18:19:41.821195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.206.134]
	I0709 18:19:41.822637       1 controller.go:615] quota admission added evaluator for: endpoints
	I0709 18:19:41.843642       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0709 18:19:42.609385       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0709 18:19:42.805564       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0709 18:19:42.871569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0709 18:19:42.907682       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0709 18:19:57.333598       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0709 18:19:57.543081       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0709 18:35:55.870544       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53940: use of closed network connection
	E0709 18:35:56.795209       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53945: use of closed network connection
	E0709 18:35:57.698486       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53950: use of closed network connection
	E0709 18:36:33.178526       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53970: use of closed network connection
	E0709 18:36:43.597768       1 conn.go:339] Error on socket receive: read tcp 172.18.206.134:8443->172.18.192.1:53972: use of closed network connection
	
	
	==> kube-controller-manager [a89ee753e775] <==
	I0709 18:19:57.815368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.660854ms"
	I0709 18:19:57.815916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.6µs"
	I0709 18:19:58.007755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.828816ms"
	I0709 18:19:58.026709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.106923ms"
	I0709 18:19:58.029403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.1µs"
	I0709 18:20:07.977654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.049991ms"
	I0709 18:20:08.015594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111µs"
	I0709 18:20:09.991729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.353168ms"
	I0709 18:20:10.001112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="868.106µs"
	I0709 18:20:11.554561       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0709 18:24:17.420348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.233775ms"
	I0709 18:24:17.441694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.911551ms"
	I0709 18:24:17.444364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.629006ms"
	I0709 18:24:20.165672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.094324ms"
	I0709 18:24:20.166173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.4µs"
	I0709 18:40:08.595141       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-849000-m03\" does not exist"
	I0709 18:40:08.641712       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-849000-m03" podCIDRs=["10.244.1.0/24"]
	I0709 18:40:11.793433       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-849000-m03"
	I0709 18:40:32.591516       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-849000-m03"
	I0709 18:40:32.616362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="263.401µs"
	I0709 18:40:32.638542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.1µs"
	I0709 18:40:35.404984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.084842ms"
	I0709 18:40:35.405359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.3µs"
	I0709 18:43:56.960196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.713036ms"
	I0709 18:43:56.960330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.3µs"
	
	
	==> kube-proxy [02ab9d172768] <==
	I0709 18:19:58.913720       1 server_linux.go:69] "Using iptables proxy"
	I0709 18:19:58.935439       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.206.134"]
	I0709 18:19:59.002175       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0709 18:19:59.002345       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0709 18:19:59.002422       1 server_linux.go:165] "Using iptables Proxier"
	I0709 18:19:59.006984       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0709 18:19:59.008394       1 server.go:872] "Version info" version="v1.30.2"
	I0709 18:19:59.008567       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0709 18:19:59.012208       1 config.go:192] "Starting service config controller"
	I0709 18:19:59.012230       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0709 18:19:59.012257       1 config.go:101] "Starting endpoint slice config controller"
	I0709 18:19:59.012263       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0709 18:19:59.014777       1 config.go:319] "Starting node config controller"
	I0709 18:19:59.015900       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0709 18:19:59.113145       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0709 18:19:59.113150       1 shared_informer.go:320] Caches are synced for service config
	I0709 18:19:59.116402       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8661e349d48a] <==
	W0709 18:19:40.760717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0709 18:19:40.760830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0709 18:19:40.849864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0709 18:19:40.850245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0709 18:19:40.865437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.865496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.872200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0709 18:19:40.872364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0709 18:19:40.917325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.917365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.931008       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0709 18:19:40.931093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0709 18:19:40.976206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0709 18:19:40.976434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0709 18:19:41.005485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0709 18:19:41.005666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0709 18:19:41.019785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0709 18:19:41.020146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0709 18:19:41.110495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0709 18:19:41.110614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0709 18:19:41.120707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0709 18:19:41.122629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0709 18:19:41.253897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0709 18:19:41.254338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0709 18:19:43.553553       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 09 18:42:42 multinode-849000 kubelet[2293]: E0709 18:42:42.972527    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:42:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:43:42 multinode-849000 kubelet[2293]: E0709 18:43:42.974622    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:43:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:43:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:43:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:43:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:44:42 multinode-849000 kubelet[2293]: E0709 18:44:42.980346    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:44:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:44:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:44:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:44:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:45:42 multinode-849000 kubelet[2293]: E0709 18:45:42.971219    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:45:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:45:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:45:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:45:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 09 18:46:42 multinode-849000 kubelet[2293]: E0709 18:46:42.972355    2293 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 09 18:46:42 multinode-849000 kubelet[2293]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 09 18:46:42 multinode-849000 kubelet[2293]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 09 18:46:42 multinode-849000 kubelet[2293]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 09 18:46:42 multinode-849000 kubelet[2293]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 11:47:32.153171    7636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-849000 -n multinode-849000: (11.8385429s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-849000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (169.42s)

                                                
                                    
TestKubernetesUpgrade (1430.46s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-715200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-715200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (5m38.1292167s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-715200
E0709 12:10:30.115299   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-715200: (34.3795916s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-715200 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-715200 status --format={{.Host}}: exit status 7 (2.4528135s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 12:10:42.236024   14872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-715200 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-715200 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=hyperv: (7m59.10756s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-715200 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-715200 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-715200 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (215.0887ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-715200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 12:18:44.008563    4056 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-715200
	    minikube start -p kubernetes-upgrade-715200 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7152002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-715200 --kubernetes-version=v1.30.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-715200 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-715200 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (6m17.1368632s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-715200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "kubernetes-upgrade-715200" primary control-plane node in "kubernetes-upgrade-715200" cluster
	* Updating the running hyperv "kubernetes-upgrade-715200" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 12:18:44.244567   10600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0709 12:18:44.246567   10600 out.go:291] Setting OutFile to fd 1416 ...
	I0709 12:18:44.247569   10600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 12:18:44.247569   10600 out.go:304] Setting ErrFile to fd 1684...
	I0709 12:18:44.247569   10600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 12:18:44.284924   10600 out.go:298] Setting JSON to false
	I0709 12:18:44.290202   10600 start.go:129] hostinfo: {"hostname":"minikube1","uptime":10992,"bootTime":1720541731,"procs":210,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 12:18:44.290202   10600 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 12:18:44.295198   10600 out.go:177] * [kubernetes-upgrade-715200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 12:18:44.301195   10600 notify.go:220] Checking for updates...
	I0709 12:18:44.303207   10600 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 12:18:44.309211   10600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 12:18:44.315206   10600 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 12:18:44.323204   10600 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 12:18:44.328204   10600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 12:18:44.333203   10600 config.go:182] Loaded profile config "kubernetes-upgrade-715200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 12:18:44.334214   10600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 12:18:50.538541   10600 out.go:177] * Using the hyperv driver based on existing profile
	I0709 12:18:50.542282   10600 start.go:297] selected driver: hyperv
	I0709 12:18:50.542282   10600 start.go:901] validating driver "hyperv" against &{Name:kubernetes-upgrade-715200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-715200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.145 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 12:18:50.543053   10600 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 12:18:50.603582   10600 cni.go:84] Creating CNI manager for ""
	I0709 12:18:50.603582   10600 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 12:18:50.603582   10600 start.go:340] cluster config:
	{Name:kubernetes-upgrade-715200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubernetes-upgrade-715200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.204.145 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 12:18:50.604698   10600 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 12:18:50.610598   10600 out.go:177] * Starting "kubernetes-upgrade-715200" primary control-plane node in "kubernetes-upgrade-715200" cluster
	I0709 12:18:50.613085   10600 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 12:18:50.613085   10600 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 12:18:50.613085   10600 cache.go:56] Caching tarball of preloaded images
	I0709 12:18:50.614380   10600 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 12:18:50.614513   10600 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0709 12:18:50.614739   10600 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubernetes-upgrade-715200\config.json ...
	I0709 12:18:50.617540   10600 start.go:360] acquireMachinesLock for kubernetes-upgrade-715200: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 12:22:29.764752   10600 start.go:364] duration metric: took 3m39.1463788s to acquireMachinesLock for "kubernetes-upgrade-715200"
	I0709 12:22:29.766563   10600 start.go:96] Skipping create...Using existing machine configuration
	I0709 12:22:29.766563   10600 fix.go:54] fixHost starting: 
	I0709 12:22:29.768640   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:22:32.005469   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:22:32.005553   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:22:32.005553   10600 fix.go:112] recreateIfNeeded on kubernetes-upgrade-715200: state=Running err=<nil>
	W0709 12:22:32.005636   10600 fix.go:138] unexpected machine state, will restart: <nil>
	I0709 12:22:32.008745   10600 out.go:177] * Updating the running hyperv "kubernetes-upgrade-715200" VM ...
	I0709 12:22:32.012262   10600 machine.go:94] provisionDockerMachine start ...
	I0709 12:22:32.012915   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:22:34.319994   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:22:34.319994   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:22:34.320575   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:22:37.100817   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:22:37.100817   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:22:37.106830   10600 main.go:141] libmachine: Using SSH client type: native
	I0709 12:22:37.107467   10600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.145 22 <nil> <nil>}
	I0709 12:22:37.107467   10600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 12:22:37.249980   10600 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-715200
	
	I0709 12:22:37.249980   10600 buildroot.go:166] provisioning hostname "kubernetes-upgrade-715200"
	I0709 12:22:37.250128   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:22:39.677379   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:22:39.677617   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:22:39.677759   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:22:42.479335   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:22:42.479335   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:22:42.483336   10600 main.go:141] libmachine: Using SSH client type: native
	I0709 12:22:42.484334   10600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.145 22 <nil> <nil>}
	I0709 12:22:42.484334   10600 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-715200 && echo "kubernetes-upgrade-715200" | sudo tee /etc/hostname
	I0709 12:22:42.666662   10600 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-715200
	
	I0709 12:22:42.666804   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:22:45.014978   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:22:45.015777   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:22:45.015852   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:22:47.723838   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:22:47.723951   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:22:47.735314   10600 main.go:141] libmachine: Using SSH client type: native
	I0709 12:22:47.735314   10600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.145 22 <nil> <nil>}
	I0709 12:22:47.735314   10600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-715200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-715200/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-715200' | sudo tee -a /etc/hosts; 
				fi
			fi
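The guarded /etc/hosts rewrite above can be exercised against a throwaway file. This is a sketch of the same idempotent logic, not the exact minikube code; `NAME` and the temp-file path are stand-ins for illustration.

```shell
#!/bin/sh
# Demonstrate the idempotent hostname entry update from the log,
# run against a temporary file instead of the real /etc/hosts.
NAME=kubernetes-upgrade-715200
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"

# Only touch the file if the name is not already present.
if ! grep -q "\s$NAME$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
    # A 127.0.1.1 line exists: rewrite it in place.
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
RESULT=$(cat "$HOSTS")
rm -f "$HOSTS"
echo "$RESULT"
```

Running the check before any write is what makes repeated provisioning passes (as on this `fix.go` restart path) safe.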
	I0709 12:22:47.878453   10600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 12:22:47.878453   10600 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 12:22:47.878453   10600 buildroot.go:174] setting up certificates
	I0709 12:22:47.878453   10600 provision.go:84] configureAuth start
	I0709 12:22:47.878453   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:22:50.148950   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:22:50.149523   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:22:50.149625   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:22:52.866975   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:22:52.866975   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:22:52.868054   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:22:55.111053   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:22:55.111053   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:22:55.111284   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:22:57.808158   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:22:57.809131   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:22:57.809196   10600 provision.go:143] copyHostCerts
	I0709 12:22:57.809701   10600 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 12:22:57.809701   10600 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 12:22:57.809890   10600 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 12:22:57.811550   10600 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 12:22:57.811602   10600 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 12:22:57.812032   10600 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 12:22:57.813633   10600 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 12:22:57.813633   10600 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 12:22:57.813923   10600 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 12:22:57.815072   10600 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-715200 san=[127.0.0.1 172.18.204.145 kubernetes-upgrade-715200 localhost minikube]
	I0709 12:22:57.972309   10600 provision.go:177] copyRemoteCerts
	I0709 12:22:57.984595   10600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 12:22:57.984595   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:00.306197   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:00.306197   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:00.306197   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:02.979844   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:02.980056   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:02.980334   10600 sshutil.go:53] new ssh client: &{IP:172.18.204.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-715200\id_rsa Username:docker}
	I0709 12:23:03.097722   10600 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.113007s)
	I0709 12:23:03.097913   10600 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 12:23:03.145602   10600 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0709 12:23:03.198081   10600 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0709 12:23:03.246695   10600 provision.go:87] duration metric: took 15.368183s to configureAuth
	I0709 12:23:03.246695   10600 buildroot.go:189] setting minikube options for container-runtime
	I0709 12:23:03.246695   10600 config.go:182] Loaded profile config "kubernetes-upgrade-715200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 12:23:03.247694   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:05.502784   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:05.502784   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:05.503698   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:08.144991   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:08.144991   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:08.151810   10600 main.go:141] libmachine: Using SSH client type: native
	I0709 12:23:08.152264   10600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.145 22 <nil> <nil>}
	I0709 12:23:08.152264   10600 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 12:23:08.284386   10600 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 12:23:08.284474   10600 buildroot.go:70] root file system type: tmpfs
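The "root file system type" value above comes from the one-liner run over SSH just before it. A minimal local rerun of the same probe (assumes GNU coreutils `df`, which supports `--output`):

```shell
#!/bin/sh
# Report the filesystem type of / the same way the log does:
# select only the fstype column and keep the last line (skips the header).
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "root fs: $FSTYPE"
```

On the Buildroot guest this prints `tmpfs`, which is why minikube installs the docker.service unit under /lib rather than treating the rootfs as persistent.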
	I0709 12:23:08.284805   10600 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 12:23:08.284805   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:10.490240   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:10.490240   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:10.490391   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:13.180217   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:13.180217   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:13.186739   10600 main.go:141] libmachine: Using SSH client type: native
	I0709 12:23:13.187347   10600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.145 22 <nil> <nil>}
	I0709 12:23:13.187347   10600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 12:23:13.351327   10600 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 12:23:13.351438   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:15.636968   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:15.636968   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:15.636968   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:18.316467   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:18.316467   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:18.324225   10600 main.go:141] libmachine: Using SSH client type: native
	I0709 12:23:18.324960   10600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.145 22 <nil> <nil>}
	I0709 12:23:18.324960   10600 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 12:23:18.472451   10600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
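The `diff -u old new || { mv ...; systemctl ... }` command above swaps in the new unit file only when it differs from the installed one. A sketch of that pattern on temporary files (paths and contents are placeholders; the real flow also runs `daemon-reload`/`restart docker`, which is stubbed out here):

```shell
#!/bin/sh
# "Write .new, diff, swap only if changed" pattern from the log,
# demonstrated without root or systemd.
CUR=$(mktemp); NEW=$(mktemp)
printf 'setting=1\n' > "$CUR"
printf 'setting=2\n' > "$NEW"

if diff -u "$CUR" "$NEW" >/dev/null; then
  # diff exits 0: files identical, nothing to do.
  echo "unchanged: keeping existing unit"
else
  # diff exits non-zero: install the new file, then (in the real
  # flow) reload systemd and restart the service.
  mv "$NEW" "$CUR"
  echo "updated: would run systemctl daemon-reload && systemctl restart docker"
fi
RESULT=$(cat "$CUR")
rm -f "$CUR" "$NEW"
echo "$RESULT"
```

Gating the restart on an actual content change is what lets this provisioning step run repeatedly without bouncing Docker every time.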
	I0709 12:23:18.472525   10600 machine.go:97] duration metric: took 46.4600866s to provisionDockerMachine
	I0709 12:23:18.472525   10600 start.go:293] postStartSetup for "kubernetes-upgrade-715200" (driver="hyperv")
	I0709 12:23:18.472525   10600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 12:23:18.486354   10600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 12:23:18.486354   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:20.732622   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:20.732622   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:20.733518   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:23.453749   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:23.453749   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:23.454555   10600 sshutil.go:53] new ssh client: &{IP:172.18.204.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-715200\id_rsa Username:docker}
	I0709 12:23:23.566517   10600 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0801436s)
	I0709 12:23:23.578515   10600 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 12:23:23.586796   10600 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 12:23:23.586863   10600 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 12:23:23.587340   10600 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 12:23:23.588423   10600 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 12:23:23.600611   10600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 12:23:23.622097   10600 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 12:23:23.678147   10600 start.go:296] duration metric: took 5.2056014s for postStartSetup
	I0709 12:23:23.678284   10600 fix.go:56] duration metric: took 53.911516s for fixHost
	I0709 12:23:23.678459   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:26.099918   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:26.099918   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:26.099918   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:28.979253   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:28.979253   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:28.987594   10600 main.go:141] libmachine: Using SSH client type: native
	I0709 12:23:28.987594   10600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.145 22 <nil> <nil>}
	I0709 12:23:28.987594   10600 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0709 12:23:29.142515   10600 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720553009.141417914
	
	I0709 12:23:29.142515   10600 fix.go:216] guest clock: 1720553009.141417914
	I0709 12:23:29.142515   10600 fix.go:229] Guest: 2024-07-09 12:23:29.141417914 -0700 PDT Remote: 2024-07-09 12:23:23.6782845 -0700 PDT m=+279.548694301 (delta=5.463133414s)
	I0709 12:23:29.142627   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:31.556874   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:31.556944   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:31.557029   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:34.352681   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:34.352968   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:34.365562   10600 main.go:141] libmachine: Using SSH client type: native
	I0709 12:23:34.366315   10600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.145 22 <nil> <nil>}
	I0709 12:23:34.366315   10600 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720553009
	I0709 12:23:34.519742   10600 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 19:23:29 UTC 2024
	
	I0709 12:23:34.519834   10600 fix.go:236] clock set: Tue Jul  9 19:23:29 UTC 2024
	 (err=<nil>)
	I0709 12:23:34.519834   10600 start.go:83] releasing machines lock for "kubernetes-upgrade-715200", held for 1m4.7548357s
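The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compute the drift against the host reading (delta=5.463133414s), and then pin the clock with `sudo date -s @<epoch>`. A minimal sketch of that comparison, with example epoch values standing in for the real readings (not minikube's actual code):

```shell
# Hedged sketch: compare a guest clock reading against a host-side reference
# and decide whether to resync, mirroring the fix.go sequence above.
guest=1720553009   # from `date +%s.%N` on the guest, truncated to seconds
remote=1720553003  # host-side reference epoch (illustrative value)
delta=$((guest - remote))
echo "delta=${delta}s"
# minikube then runs `sudo date -s @<epoch>` over SSH to set the guest clock
[ "$delta" -ne 0 ] && echo "resync: sudo date -s @${guest}"
```

With the example values this reports a 6-second drift and prints the resync command it would issue.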
	I0709 12:23:34.520096   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:37.036053   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:37.036549   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:37.036667   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:40.114401   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:40.115277   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:40.123166   10600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 12:23:40.123166   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:40.139607   10600 ssh_runner.go:195] Run: cat /version.json
	I0709 12:23:40.139607   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:42.654280   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:42.654357   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:42.654357   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:42.693568   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:42.693568   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:42.693568   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:45.443820   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:45.444492   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:45.444887   10600 sshutil.go:53] new ssh client: &{IP:172.18.204.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-715200\id_rsa Username:docker}
	I0709 12:23:45.475254   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:45.475254   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:45.475552   10600 sshutil.go:53] new ssh client: &{IP:172.18.204.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-715200\id_rsa Username:docker}
	I0709 12:23:45.552995   10600 ssh_runner.go:235] Completed: cat /version.json: (5.4133674s)
	I0709 12:23:45.566507   10600 ssh_runner.go:195] Run: systemctl --version
	I0709 12:23:47.581783   10600 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.457758s)
	I0709 12:23:47.581783   10600 ssh_runner.go:235] Completed: systemctl --version: (2.0152685s)
	W0709 12:23:47.581933   10600 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0709 12:23:47.582088   10600 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	! This VM is having trouble accessing https://registry.k8s.io
	W0709 12:23:47.582088   10600 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
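The `curl -sS -m 2 https://registry.k8s.io/` failure above exits with status 28, curl's generic timeout code (CURLE_OPERATION_TIMEDOUT); here the 2-second `-m 2` budget was consumed by DNS resolution, which is why the proxy hint is printed. A small helper showing how such an exit status might be decoded (the helper name is made up for illustration):

```shell
# Hypothetical helper: map a curl exit status to a short diagnosis.
# 28 = CURLE_OPERATION_TIMEDOUT (DNS, connect, or transfer exceeded -m/--max-time).
explain_curl() {
  case "$1" in
    28) echo "timeout (DNS or connect exceeded -m)" ;;
    6)  echo "could not resolve host" ;;
    *)  echo "other curl failure ($1)" ;;
  esac
}
explain_curl 28
```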
	I0709 12:23:47.596626   10600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0709 12:23:47.606753   10600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 12:23:47.618822   10600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0709 12:23:47.648384   10600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0709 12:23:47.681627   10600 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 12:23:47.681627   10600 start.go:494] detecting cgroup driver to use...
	I0709 12:23:47.681627   10600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 12:23:47.729245   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 12:23:47.763032   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 12:23:47.784949   10600 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 12:23:47.796951   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 12:23:47.831328   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 12:23:47.863612   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 12:23:47.903944   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 12:23:47.935644   10600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 12:23:47.967034   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 12:23:47.999038   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 12:23:48.038934   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
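The run of sed invocations above all rewrite /etc/containerd/config.toml in place to match the "cgroupfs" driver decision. A condensed, self-contained sketch of the two key edits (disable SystemdCgroup, switch to the runc v2 shim) against a local stand-in file, using the same sed expressions as the log:

```shell
# Stand-in for /etc/containerd/config.toml with the pre-edit values.
CFG=./config.toml
cat > "$CFG" <<'EOF'
SystemdCgroup = true
runtime_type = "io.containerd.runtime.v1.linux"
EOF

# Same substitutions as the ssh_runner commands above (GNU sed assumed).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
cat "$CFG"
```

After the edits the file reads `SystemdCgroup = false` and `runtime_type = "io.containerd.runc.v2"`, i.e. containerd driving cgroups directly with the v2 runc shim.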
	I0709 12:23:48.086785   10600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 12:23:48.123564   10600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 12:23:48.160873   10600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 12:23:48.454577   10600 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 12:23:48.496694   10600 start.go:494] detecting cgroup driver to use...
	I0709 12:23:48.511045   10600 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 12:23:48.562196   10600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 12:23:48.601773   10600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 12:23:48.652173   10600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 12:23:48.693124   10600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 12:23:48.719323   10600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 12:23:48.770012   10600 ssh_runner.go:195] Run: which cri-dockerd
	I0709 12:23:48.788361   10600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 12:23:48.808454   10600 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 12:23:48.861431   10600 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 12:23:49.145115   10600 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 12:23:49.413707   10600 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 12:23:49.414030   10600 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
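The docker.go line above reports configuring Docker for the "cgroupfs" cgroup driver by scp'ing a 130-byte /etc/docker/daemon.json from memory. The exact payload is not shown in the log, but a typical daemon.json carrying that setting looks like the following (illustrative only, written to a local stand-in path):

```shell
# Illustrative daemon.json pinning Docker to the cgroupfs driver; the real
# file minikube writes may contain additional keys.
cat > ./daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
grep -q 'cgroupfs' ./daemon.json && echo "daemon.json written"
```

A `sudo systemctl daemon-reload && sudo systemctl restart docker` (as in the next log lines) is what makes the setting take effect; here that restart is the step that ultimately fails.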
	I0709 12:23:49.467127   10600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 12:23:49.739272   10600 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 12:25:01.121955   10600 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3824175s)
	I0709 12:25:01.135787   10600 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0709 12:25:01.206450   10600 out.go:177] 
	W0709 12:25:01.210298   10600 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 19:17:29 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:17:29 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:29.966775003Z" level=info msg="Starting up"
	Jul 09 19:17:29 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:29.968062977Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 19:17:29 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:29.969722044Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.009364370Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039392812Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039626308Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039769105Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039843004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.040851685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.040999982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041266377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041633470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041659670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041673970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.042219260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.043067444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046474580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046577479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046747375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046788075Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.047356964Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.047418463Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.051727483Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.051965878Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052062277Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052190474Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052214074Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052389871Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052662365Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052874662Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053017159Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053061358Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053078858Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053096657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053111857Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053131357Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053148256Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053163656Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053177956Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053192056Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053215655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053248955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053262954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053278454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053291554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053348653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053365352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053379852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053397452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053414252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053428651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053442351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053456051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053572349Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053610948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053628648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053659947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053989241Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054120038Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054142138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054157738Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054170037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054186037Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054198537Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054650529Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054795626Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054857325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054879524Z" level=info msg="containerd successfully booted in 0.049404s"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.029698846Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.181028910Z" level=info msg="Loading containers: start."
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.580056459Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.727604493Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.826843234Z" level=info msg="Loading containers: done."
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.853540484Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.854533278Z" level=info msg="Daemon has completed initialization"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.910034765Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.910362664Z" level=info msg="API listen on [::]:2376"
	Jul 09 19:17:31 kubernetes-upgrade-715200 systemd[1]: Started Docker Application Container Engine.
	Jul 09 19:17:59 kubernetes-upgrade-715200 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.082648121Z" level=info msg="Processing signal 'terminated'"
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.084436643Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.084934748Z" level=info msg="Daemon shutdown complete"
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.084980249Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.085009149Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 19:18:00 kubernetes-upgrade-715200 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 19:18:00 kubernetes-upgrade-715200 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 19:18:00 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:00.157745091Z" level=info msg="Starting up"
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:00.159324709Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:00.162799450Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1180
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.203422329Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.234924400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235054402Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235131303Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235165203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235228804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235259804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235664309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235834711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235873711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.236081614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.236308816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.236695221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.240766869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.240824470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241044172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241149373Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241186174Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241208074Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242317487Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242403188Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242430989Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242456789Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242482889Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242709492Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.244334011Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.244628715Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.244983819Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245208321Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245611726Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245701627Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245794028Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245856729Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245981330Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246070131Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246105732Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246131832Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246169833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246227233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246244934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246258634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246273034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246286334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246299934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246314034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246331135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246347735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246360435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246373035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246386535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246405635Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246429436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246449736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246465136Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246578937Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246650138Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246665138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246679139Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246692439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246706839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246718439Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247096144Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247174645Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247227345Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247256445Z" level=info msg="containerd successfully booted in 0.045735s"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.206803953Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.242228370Z" level=info msg="Loading containers: start."
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.531765582Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.649880674Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.747493425Z" level=info msg="Loading containers: done."
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.773254728Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.773336529Z" level=info msg="Daemon has completed initialization"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.824065227Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.824183528Z" level=info msg="API listen on [::]:2376"
	Jul 09 19:18:01 kubernetes-upgrade-715200 systemd[1]: Started Docker Application Container Engine.
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.774201534Z" level=info msg="Processing signal 'terminated'"
	Jul 09 19:18:14 kubernetes-upgrade-715200 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.775592051Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.775940455Z" level=info msg="Daemon shutdown complete"
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.776005356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.776028456Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 19:18:15 kubernetes-upgrade-715200 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 19:18:15 kubernetes-upgrade-715200 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 19:18:15 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:15.850130313Z" level=info msg="Starting up"
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:15.851633531Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:15.853362351Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1647
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.891670303Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920238439Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920290340Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920337041Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920354141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920419742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920441142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920948648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921045949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921079549Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921094050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921137950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921669656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.924541690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.924683792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925030296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925138597Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925174198Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925205498Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925458801Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925522902Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925544702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925561702Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925578402Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925653503Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926076108Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926220310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926241410Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926256610Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926273411Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926288911Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926303411Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926319511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926335911Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926350511Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926364212Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926376612Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926398612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926414112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926428712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926443513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926464413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926494213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926507613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926523013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926541814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926559514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926572214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926585614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926598514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926615215Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926637315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926651315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926664415Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926733016Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926764916Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926788717Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926927118Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926949319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926964819Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926977319Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927233822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927469725Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927551226Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927592926Z" level=info msg="containerd successfully booted in 0.038342s"
	Jul 09 19:18:16 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:16.900369790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 19:18:17 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:17.862515628Z" level=info msg="Loading containers: start."
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.130108181Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.248362375Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.348426754Z" level=info msg="Loading containers: done."
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.373377548Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.373530250Z" level=info msg="Daemon has completed initialization"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.425394261Z" level=info msg="API listen on [::]:2376"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.425552663Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 19:18:18 kubernetes-upgrade-715200 systemd[1]: Started Docker Application Container Engine.
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.650753418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.651190420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.651213630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.654037335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772544494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772735982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772778602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772880749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.834567553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.840737704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.841115578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.842271212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903201366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903270098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903285005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903377548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.053522345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.054271570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.054440943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.055093726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.272716661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.275791996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.276076019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.276420969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.423097017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.423275694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.429780017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.430225310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493079585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493217245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493250059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493431038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.460647619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.461853702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.462263732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.467682453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559414882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559511713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559524517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559619947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.571996677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.572478830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.572711104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.573665507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.508476793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.509241115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.512646505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.514100928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.605366956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.610946078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.611019699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.611505240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.611673489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.614399782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.618975812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.623421704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:42.797691178Z" level=info msg="ignoring event" container=35f7cfaaefdaaad81315e73b6e50a7f238adf0cf840fb3daa9065fe0c362e99f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.800314414Z" level=info msg="shim disconnected" id=35f7cfaaefdaaad81315e73b6e50a7f238adf0cf840fb3daa9065fe0c362e99f namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.800755553Z" level=warning msg="cleaning up after shim disconnected" id=35f7cfaaefdaaad81315e73b6e50a7f238adf0cf840fb3daa9065fe0c362e99f namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.800881036Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:42.966610764Z" level=info msg="ignoring event" container=c3bed982ed71cc9c45611886a5bab9569cf1c4fb29cd402afdc3169ce4718f44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.967549334Z" level=info msg="shim disconnected" id=c3bed982ed71cc9c45611886a5bab9569cf1c4fb29cd402afdc3169ce4718f44 namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.969217003Z" level=warning msg="cleaning up after shim disconnected" id=c3bed982ed71cc9c45611886a5bab9569cf1c4fb29cd402afdc3169ce4718f44 namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.969364283Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.480480108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.480663283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.480679181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.481408581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.084528363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.084628965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.084650166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.085508091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.399448146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.399703353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.399753555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.400076264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.546749148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.547357565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.547489769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.548044785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:48.083380401Z" level=info msg="ignoring event" container=0417adc8bc14a6f1318285fef712d75f5e7028442eaec5a4188798c67e10d99d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.084430488Z" level=info msg="shim disconnected" id=0417adc8bc14a6f1318285fef712d75f5e7028442eaec5a4188798c67e10d99d namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.084650586Z" level=warning msg="cleaning up after shim disconnected" id=0417adc8bc14a6f1318285fef712d75f5e7028442eaec5a4188798c67e10d99d namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.084672385Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.111624250Z" level=warning msg="cleanup warnings time=\"2024-07-09T19:18:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:48.274455627Z" level=info msg="ignoring event" container=10a196cc18370a42c22ece76a1707fac4d7d24802708b9d37bb3ace0223306b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.274737724Z" level=info msg="shim disconnected" id=10a196cc18370a42c22ece76a1707fac4d7d24802708b9d37bb3ace0223306b2 namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.275200318Z" level=warning msg="cleaning up after shim disconnected" id=10a196cc18370a42c22ece76a1707fac4d7d24802708b9d37bb3ace0223306b2 namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.275332816Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:19:01.883153039Z" level=info msg="ignoring event" container=af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:01.883398143Z" level=info msg="shim disconnected" id=af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c namespace=moby
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:01.883459044Z" level=warning msg="cleaning up after shim disconnected" id=af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c namespace=moby
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:01.883468344Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.033836975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.033970377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.033986477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.034766790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:20:41.643665911Z" level=info msg="ignoring event" container=311161079c9a349140940fd392cde25216666467dfbe15d0356bb1105c9ff236 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.644425122Z" level=info msg="shim disconnected" id=311161079c9a349140940fd392cde25216666467dfbe15d0356bb1105c9ff236 namespace=moby
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.647429764Z" level=warning msg="cleaning up after shim disconnected" id=311161079c9a349140940fd392cde25216666467dfbe15d0356bb1105c9ff236 namespace=moby
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.647513265Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858363355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858577758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858601459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858975864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.245264804Z" level=info msg="shim disconnected" id=a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003 namespace=moby
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.245426406Z" level=warning msg="cleaning up after shim disconnected" id=a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003 namespace=moby
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.245440906Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:22:33.247542631Z" level=info msg="ignoring event" container=a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.472994490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.473078191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.473092592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.473631998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:49.770756179Z" level=info msg="Processing signal 'terminated'"
	Jul 09 19:23:49 kubernetes-upgrade-715200 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:49.970089135Z" level=info msg="shim disconnected" id=f36488950ae11b8fcff7532e726a3cdc9380a54a4973499e5a3df029656f5a3e namespace=moby
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:49.970898038Z" level=warning msg="cleaning up after shim disconnected" id=f36488950ae11b8fcff7532e726a3cdc9380a54a4973499e5a3df029656f5a3e namespace=moby
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:49.971512341Z" level=info msg="ignoring event" container=f36488950ae11b8fcff7532e726a3cdc9380a54a4973499e5a3df029656f5a3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:49.972001343Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.014132324Z" level=info msg="ignoring event" container=30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.015002828Z" level=info msg="shim disconnected" id=30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.015074628Z" level=warning msg="cleaning up after shim disconnected" id=30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.015091128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.042828347Z" level=info msg="ignoring event" container=7a55d187e689fc7831d3e5e90c8cdd2383af903bcd5872d4cf6c34a1b388b380 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.045966761Z" level=info msg="shim disconnected" id=7a55d187e689fc7831d3e5e90c8cdd2383af903bcd5872d4cf6c34a1b388b380 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.048715572Z" level=warning msg="cleaning up after shim disconnected" id=7a55d187e689fc7831d3e5e90c8cdd2383af903bcd5872d4cf6c34a1b388b380 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.048854873Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.051459484Z" level=info msg="ignoring event" container=1ab1a023780f9d559a0b7f322662f69b1f3dfff428f277a11a905ff7165b9a71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.053463093Z" level=info msg="shim disconnected" id=1ab1a023780f9d559a0b7f322662f69b1f3dfff428f277a11a905ff7165b9a71 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.053586193Z" level=warning msg="cleaning up after shim disconnected" id=1ab1a023780f9d559a0b7f322662f69b1f3dfff428f277a11a905ff7165b9a71 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.053802094Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.093342764Z" level=info msg="shim disconnected" id=d750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.093431664Z" level=warning msg="cleaning up after shim disconnected" id=d750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.093453464Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.100765696Z" level=info msg="ignoring event" container=d750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.100857696Z" level=info msg="ignoring event" container=26c45188ff0e57c02752b8b5c7cb7db13ff3565f3c4475893093da999ad2448d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.100613095Z" level=info msg="shim disconnected" id=26c45188ff0e57c02752b8b5c7cb7db13ff3565f3c4475893093da999ad2448d namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.102771604Z" level=warning msg="cleaning up after shim disconnected" id=26c45188ff0e57c02752b8b5c7cb7db13ff3565f3c4475893093da999ad2448d namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.112279745Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.113293350Z" level=info msg="shim disconnected" id=90ecf4c718ddc8ee59cb1f952e4c34ab160349d676fe1cb69986996deaea9152 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.113399450Z" level=warning msg="cleaning up after shim disconnected" id=90ecf4c718ddc8ee59cb1f952e4c34ab160349d676fe1cb69986996deaea9152 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.113482550Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.112231145Z" level=info msg="shim disconnected" id=2402bcb10993450faa7e2f67ffbc8039db9fc743b2cb23624ca09f9fc5977909 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.117674268Z" level=warning msg="cleaning up after shim disconnected" id=2402bcb10993450faa7e2f67ffbc8039db9fc743b2cb23624ca09f9fc5977909 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.117745869Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.119850678Z" level=info msg="shim disconnected" id=85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.120075879Z" level=warning msg="cleaning up after shim disconnected" id=85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.120293480Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.123305193Z" level=info msg="shim disconnected" id=fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.123362093Z" level=warning msg="cleaning up after shim disconnected" id=fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.123374593Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.123762395Z" level=info msg="ignoring event" container=2402bcb10993450faa7e2f67ffbc8039db9fc743b2cb23624ca09f9fc5977909 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.124251197Z" level=info msg="ignoring event" container=85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.124398797Z" level=info msg="ignoring event" container=90ecf4c718ddc8ee59cb1f952e4c34ab160349d676fe1cb69986996deaea9152 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.124757399Z" level=info msg="ignoring event" container=fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.132488832Z" level=info msg="shim disconnected" id=ebedd5df1e7ddf4730b97fd4d185b66a6aaa78b164467717c5c437de2bc63d36 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.132918834Z" level=info msg="ignoring event" container=ebedd5df1e7ddf4730b97fd4d185b66a6aaa78b164467717c5c437de2bc63d36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.133509736Z" level=warning msg="cleaning up after shim disconnected" id=ebedd5df1e7ddf4730b97fd4d185b66a6aaa78b164467717c5c437de2bc63d36 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.133727737Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.184719556Z" level=warning msg="cleanup warnings time=\"2024-07-09T19:23:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.303317466Z" level=info msg="shim disconnected" id=0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.303376966Z" level=warning msg="cleaning up after shim disconnected" id=0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.303388466Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.307043182Z" level=warning msg="cleanup warnings time=\"2024-07-09T19:23:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.308132086Z" level=info msg="ignoring event" container=0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:54.843491962Z" level=info msg="shim disconnected" id=16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541 namespace=moby
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:54.843849264Z" level=warning msg="cleaning up after shim disconnected" id=16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541 namespace=moby
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:54.844061564Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:54.847986081Z" level=info msg="ignoring event" container=16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:59.907304255Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:59.948124666Z" level=info msg="ignoring event" container=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:59.948871575Z" level=info msg="shim disconnected" id=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7 namespace=moby
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:59.948980177Z" level=warning msg="cleaning up after shim disconnected" id=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7 namespace=moby
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:59.949010977Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.017412625Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.018166734Z" level=info msg="Daemon shutdown complete"
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.018335036Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.018348536Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Consumed 11.555s CPU time.
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:24:01 kubernetes-upgrade-715200 dockerd[5468]: time="2024-07-09T19:24:01.094168712Z" level=info msg="Starting up"
	Jul 09 19:25:01 kubernetes-upgrade-715200 dockerd[5468]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 19:25:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 19:25:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 19:25:01 kubernetes-upgrade-715200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 19:17:29 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:17:29 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:29.966775003Z" level=info msg="Starting up"
	Jul 09 19:17:29 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:29.968062977Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 19:17:29 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:29.969722044Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.009364370Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039392812Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039626308Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039769105Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039843004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.040851685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.040999982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041266377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041633470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041659670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041673970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.042219260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.043067444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046474580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046577479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046747375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046788075Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.047356964Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.047418463Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.051727483Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.051965878Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052062277Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052190474Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052214074Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052389871Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052662365Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052874662Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053017159Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053061358Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053078858Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053096657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053111857Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053131357Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053148256Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053163656Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053177956Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053192056Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053215655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053248955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053262954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053278454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053291554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053348653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053365352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053379852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053397452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053414252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053428651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053442351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053456051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053572349Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053610948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053628648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053659947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053989241Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054120038Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054142138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054157738Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054170037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054186037Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054198537Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054650529Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054795626Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054857325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054879524Z" level=info msg="containerd successfully booted in 0.049404s"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.029698846Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.181028910Z" level=info msg="Loading containers: start."
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.580056459Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.727604493Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.826843234Z" level=info msg="Loading containers: done."
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.853540484Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.854533278Z" level=info msg="Daemon has completed initialization"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.910034765Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.910362664Z" level=info msg="API listen on [::]:2376"
	Jul 09 19:17:31 kubernetes-upgrade-715200 systemd[1]: Started Docker Application Container Engine.
	Jul 09 19:17:59 kubernetes-upgrade-715200 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.082648121Z" level=info msg="Processing signal 'terminated'"
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.084436643Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.084934748Z" level=info msg="Daemon shutdown complete"
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.084980249Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.085009149Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 19:18:00 kubernetes-upgrade-715200 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 19:18:00 kubernetes-upgrade-715200 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 19:18:00 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:00.157745091Z" level=info msg="Starting up"
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:00.159324709Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:00.162799450Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1180
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.203422329Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.234924400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235054402Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235131303Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235165203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235228804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235259804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235664309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235834711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235873711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.236081614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.236308816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.236695221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.240766869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.240824470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241044172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241149373Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241186174Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241208074Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242317487Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242403188Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242430989Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242456789Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242482889Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242709492Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.244334011Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.244628715Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.244983819Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245208321Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245611726Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245701627Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245794028Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245856729Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245981330Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246070131Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246105732Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246131832Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246169833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246227233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246244934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246258634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246273034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246286334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246299934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246314034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246331135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246347735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246360435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246373035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246386535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246405635Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246429436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246449736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246465136Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246578937Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246650138Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246665138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246679139Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246692439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246706839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246718439Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247096144Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247174645Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247227345Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247256445Z" level=info msg="containerd successfully booted in 0.045735s"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.206803953Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.242228370Z" level=info msg="Loading containers: start."
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.531765582Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.649880674Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.747493425Z" level=info msg="Loading containers: done."
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.773254728Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.773336529Z" level=info msg="Daemon has completed initialization"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.824065227Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.824183528Z" level=info msg="API listen on [::]:2376"
	Jul 09 19:18:01 kubernetes-upgrade-715200 systemd[1]: Started Docker Application Container Engine.
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.774201534Z" level=info msg="Processing signal 'terminated'"
	Jul 09 19:18:14 kubernetes-upgrade-715200 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.775592051Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.775940455Z" level=info msg="Daemon shutdown complete"
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.776005356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.776028456Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 19:18:15 kubernetes-upgrade-715200 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 19:18:15 kubernetes-upgrade-715200 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 19:18:15 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:15.850130313Z" level=info msg="Starting up"
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:15.851633531Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:15.853362351Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1647
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.891670303Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920238439Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920290340Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920337041Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920354141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920419742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920441142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920948648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921045949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921079549Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921094050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921137950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921669656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.924541690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.924683792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925030296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925138597Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925174198Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925205498Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925458801Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925522902Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925544702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925561702Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925578402Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925653503Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926076108Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926220310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926241410Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926256610Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926273411Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926288911Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926303411Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926319511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926335911Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926350511Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926364212Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926376612Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926398612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926414112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926428712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926443513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926464413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926494213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926507613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926523013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926541814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926559514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926572214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926585614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926598514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926615215Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926637315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926651315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926664415Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926733016Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926764916Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926788717Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926927118Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926949319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926964819Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926977319Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927233822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927469725Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927551226Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927592926Z" level=info msg="containerd successfully booted in 0.038342s"
	Jul 09 19:18:16 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:16.900369790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 19:18:17 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:17.862515628Z" level=info msg="Loading containers: start."
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.130108181Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.248362375Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.348426754Z" level=info msg="Loading containers: done."
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.373377548Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.373530250Z" level=info msg="Daemon has completed initialization"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.425394261Z" level=info msg="API listen on [::]:2376"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.425552663Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 19:18:18 kubernetes-upgrade-715200 systemd[1]: Started Docker Application Container Engine.
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.650753418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.651190420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.651213630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.654037335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772544494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772735982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772778602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772880749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.834567553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.840737704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.841115578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.842271212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903201366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903270098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903285005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903377548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.053522345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.054271570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.054440943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.055093726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.272716661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.275791996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.276076019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.276420969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.423097017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.423275694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.429780017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.430225310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493079585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493217245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493250059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493431038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.460647619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.461853702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.462263732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.467682453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559414882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559511713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559524517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559619947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.571996677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.572478830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.572711104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.573665507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.508476793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.509241115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.512646505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.514100928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.605366956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.610946078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.611019699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.611505240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.611673489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.614399782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.618975812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.623421704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:42.797691178Z" level=info msg="ignoring event" container=35f7cfaaefdaaad81315e73b6e50a7f238adf0cf840fb3daa9065fe0c362e99f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.800314414Z" level=info msg="shim disconnected" id=35f7cfaaefdaaad81315e73b6e50a7f238adf0cf840fb3daa9065fe0c362e99f namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.800755553Z" level=warning msg="cleaning up after shim disconnected" id=35f7cfaaefdaaad81315e73b6e50a7f238adf0cf840fb3daa9065fe0c362e99f namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.800881036Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:42.966610764Z" level=info msg="ignoring event" container=c3bed982ed71cc9c45611886a5bab9569cf1c4fb29cd402afdc3169ce4718f44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.967549334Z" level=info msg="shim disconnected" id=c3bed982ed71cc9c45611886a5bab9569cf1c4fb29cd402afdc3169ce4718f44 namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.969217003Z" level=warning msg="cleaning up after shim disconnected" id=c3bed982ed71cc9c45611886a5bab9569cf1c4fb29cd402afdc3169ce4718f44 namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.969364283Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.480480108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.480663283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.480679181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.481408581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.084528363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.084628965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.084650166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.085508091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.399448146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.399703353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.399753555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.400076264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.546749148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.547357565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.547489769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.548044785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:48.083380401Z" level=info msg="ignoring event" container=0417adc8bc14a6f1318285fef712d75f5e7028442eaec5a4188798c67e10d99d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.084430488Z" level=info msg="shim disconnected" id=0417adc8bc14a6f1318285fef712d75f5e7028442eaec5a4188798c67e10d99d namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.084650586Z" level=warning msg="cleaning up after shim disconnected" id=0417adc8bc14a6f1318285fef712d75f5e7028442eaec5a4188798c67e10d99d namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.084672385Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.111624250Z" level=warning msg="cleanup warnings time=\"2024-07-09T19:18:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:48.274455627Z" level=info msg="ignoring event" container=10a196cc18370a42c22ece76a1707fac4d7d24802708b9d37bb3ace0223306b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.274737724Z" level=info msg="shim disconnected" id=10a196cc18370a42c22ece76a1707fac4d7d24802708b9d37bb3ace0223306b2 namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.275200318Z" level=warning msg="cleaning up after shim disconnected" id=10a196cc18370a42c22ece76a1707fac4d7d24802708b9d37bb3ace0223306b2 namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.275332816Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:19:01.883153039Z" level=info msg="ignoring event" container=af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:01.883398143Z" level=info msg="shim disconnected" id=af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c namespace=moby
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:01.883459044Z" level=warning msg="cleaning up after shim disconnected" id=af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c namespace=moby
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:01.883468344Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.033836975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.033970377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.033986477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.034766790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:20:41.643665911Z" level=info msg="ignoring event" container=311161079c9a349140940fd392cde25216666467dfbe15d0356bb1105c9ff236 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.644425122Z" level=info msg="shim disconnected" id=311161079c9a349140940fd392cde25216666467dfbe15d0356bb1105c9ff236 namespace=moby
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.647429764Z" level=warning msg="cleaning up after shim disconnected" id=311161079c9a349140940fd392cde25216666467dfbe15d0356bb1105c9ff236 namespace=moby
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.647513265Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858363355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858577758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858601459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858975864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.245264804Z" level=info msg="shim disconnected" id=a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003 namespace=moby
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.245426406Z" level=warning msg="cleaning up after shim disconnected" id=a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003 namespace=moby
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.245440906Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:22:33.247542631Z" level=info msg="ignoring event" container=a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.472994490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.473078191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.473092592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.473631998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:49.770756179Z" level=info msg="Processing signal 'terminated'"
	Jul 09 19:23:49 kubernetes-upgrade-715200 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:49.970089135Z" level=info msg="shim disconnected" id=f36488950ae11b8fcff7532e726a3cdc9380a54a4973499e5a3df029656f5a3e namespace=moby
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:49.970898038Z" level=warning msg="cleaning up after shim disconnected" id=f36488950ae11b8fcff7532e726a3cdc9380a54a4973499e5a3df029656f5a3e namespace=moby
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:49.971512341Z" level=info msg="ignoring event" container=f36488950ae11b8fcff7532e726a3cdc9380a54a4973499e5a3df029656f5a3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:49.972001343Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.014132324Z" level=info msg="ignoring event" container=30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.015002828Z" level=info msg="shim disconnected" id=30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.015074628Z" level=warning msg="cleaning up after shim disconnected" id=30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.015091128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.042828347Z" level=info msg="ignoring event" container=7a55d187e689fc7831d3e5e90c8cdd2383af903bcd5872d4cf6c34a1b388b380 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.045966761Z" level=info msg="shim disconnected" id=7a55d187e689fc7831d3e5e90c8cdd2383af903bcd5872d4cf6c34a1b388b380 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.048715572Z" level=warning msg="cleaning up after shim disconnected" id=7a55d187e689fc7831d3e5e90c8cdd2383af903bcd5872d4cf6c34a1b388b380 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.048854873Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.051459484Z" level=info msg="ignoring event" container=1ab1a023780f9d559a0b7f322662f69b1f3dfff428f277a11a905ff7165b9a71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.053463093Z" level=info msg="shim disconnected" id=1ab1a023780f9d559a0b7f322662f69b1f3dfff428f277a11a905ff7165b9a71 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.053586193Z" level=warning msg="cleaning up after shim disconnected" id=1ab1a023780f9d559a0b7f322662f69b1f3dfff428f277a11a905ff7165b9a71 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.053802094Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.093342764Z" level=info msg="shim disconnected" id=d750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.093431664Z" level=warning msg="cleaning up after shim disconnected" id=d750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.093453464Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.100765696Z" level=info msg="ignoring event" container=d750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.100857696Z" level=info msg="ignoring event" container=26c45188ff0e57c02752b8b5c7cb7db13ff3565f3c4475893093da999ad2448d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.100613095Z" level=info msg="shim disconnected" id=26c45188ff0e57c02752b8b5c7cb7db13ff3565f3c4475893093da999ad2448d namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.102771604Z" level=warning msg="cleaning up after shim disconnected" id=26c45188ff0e57c02752b8b5c7cb7db13ff3565f3c4475893093da999ad2448d namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.112279745Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.113293350Z" level=info msg="shim disconnected" id=90ecf4c718ddc8ee59cb1f952e4c34ab160349d676fe1cb69986996deaea9152 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.113399450Z" level=warning msg="cleaning up after shim disconnected" id=90ecf4c718ddc8ee59cb1f952e4c34ab160349d676fe1cb69986996deaea9152 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.113482550Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.112231145Z" level=info msg="shim disconnected" id=2402bcb10993450faa7e2f67ffbc8039db9fc743b2cb23624ca09f9fc5977909 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.117674268Z" level=warning msg="cleaning up after shim disconnected" id=2402bcb10993450faa7e2f67ffbc8039db9fc743b2cb23624ca09f9fc5977909 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.117745869Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.119850678Z" level=info msg="shim disconnected" id=85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.120075879Z" level=warning msg="cleaning up after shim disconnected" id=85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.120293480Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.123305193Z" level=info msg="shim disconnected" id=fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.123362093Z" level=warning msg="cleaning up after shim disconnected" id=fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.123374593Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.123762395Z" level=info msg="ignoring event" container=2402bcb10993450faa7e2f67ffbc8039db9fc743b2cb23624ca09f9fc5977909 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.124251197Z" level=info msg="ignoring event" container=85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.124398797Z" level=info msg="ignoring event" container=90ecf4c718ddc8ee59cb1f952e4c34ab160349d676fe1cb69986996deaea9152 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.124757399Z" level=info msg="ignoring event" container=fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.132488832Z" level=info msg="shim disconnected" id=ebedd5df1e7ddf4730b97fd4d185b66a6aaa78b164467717c5c437de2bc63d36 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.132918834Z" level=info msg="ignoring event" container=ebedd5df1e7ddf4730b97fd4d185b66a6aaa78b164467717c5c437de2bc63d36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.133509736Z" level=warning msg="cleaning up after shim disconnected" id=ebedd5df1e7ddf4730b97fd4d185b66a6aaa78b164467717c5c437de2bc63d36 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.133727737Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.184719556Z" level=warning msg="cleanup warnings time=\"2024-07-09T19:23:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.303317466Z" level=info msg="shim disconnected" id=0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.303376966Z" level=warning msg="cleaning up after shim disconnected" id=0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.303388466Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.307043182Z" level=warning msg="cleanup warnings time=\"2024-07-09T19:23:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.308132086Z" level=info msg="ignoring event" container=0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:54.843491962Z" level=info msg="shim disconnected" id=16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541 namespace=moby
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:54.843849264Z" level=warning msg="cleaning up after shim disconnected" id=16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541 namespace=moby
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:54.844061564Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:54.847986081Z" level=info msg="ignoring event" container=16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:59.907304255Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:59.948124666Z" level=info msg="ignoring event" container=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:59.948871575Z" level=info msg="shim disconnected" id=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7 namespace=moby
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:59.948980177Z" level=warning msg="cleaning up after shim disconnected" id=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7 namespace=moby
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:59.949010977Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.017412625Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.018166734Z" level=info msg="Daemon shutdown complete"
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.018335036Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.018348536Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Consumed 11.555s CPU time.
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:24:01 kubernetes-upgrade-715200 dockerd[5468]: time="2024-07-09T19:24:01.094168712Z" level=info msg="Starting up"
	Jul 09 19:25:01 kubernetes-upgrade-715200 dockerd[5468]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 19:25:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 19:25:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 19:25:01 kubernetes-upgrade-715200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0709 12:25:01.213682   10600 out.go:239] * 
	W0709 12:25:01.215371   10600 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0709 12:25:01.221924   10600 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-715200 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=hyperv: exit status 90
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-09 12:25:01.689807 -0700 PDT m=+10058.449136501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-715200 -n kubernetes-upgrade-715200
E0709 12:25:13.367778   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-715200 -n kubernetes-upgrade-715200: exit status 2 (13.0780074s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 12:25:01.815515    9496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-715200 logs -n 25
E0709 12:25:30.117473   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-715200 logs -n 25: (1m47.2375701s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args               |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-254600 sudo cat        | cilium-254600             | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:08 PDT |                     |
	|         | /etc/containerd/config.toml      |                           |                   |         |                     |                     |
	| ssh     | -p cilium-254600 sudo            | cilium-254600             | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:08 PDT |                     |
	|         | containerd config dump           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-254600 sudo            | cilium-254600             | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:08 PDT |                     |
	|         | systemctl status crio --all      |                           |                   |         |                     |                     |
	|         | --full --no-pager                |                           |                   |         |                     |                     |
	| ssh     | -p cilium-254600 sudo            | cilium-254600             | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:08 PDT |                     |
	|         | systemctl cat crio --no-pager    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-254600 sudo find       | cilium-254600             | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:08 PDT |                     |
	|         | /etc/crio -type f -exec sh -c    |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;             |                           |                   |         |                     |                     |
	| ssh     | -p cilium-254600 sudo crio       | cilium-254600             | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:08 PDT |                     |
	|         | config                           |                           |                   |         |                     |                     |
	| delete  | -p cilium-254600                 | cilium-254600             | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:08 PDT | 09 Jul 24 12:08 PDT |
	| start   | -p force-systemd-env-881500      | force-systemd-env-881500  | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:08 PDT | 09 Jul 24 12:14 PDT |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| delete  | -p NoKubernetes-492900           | NoKubernetes-492900       | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:09 PDT | 09 Jul 24 12:09 PDT |
	| start   | -p cert-expiration-206200        | cert-expiration-206200    | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:09 PDT | 09 Jul 24 12:17 PDT |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m             |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-715200     | kubernetes-upgrade-715200 | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:10 PDT | 09 Jul 24 12:10 PDT |
	| start   | -p kubernetes-upgrade-715200     | kubernetes-upgrade-715200 | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:10 PDT | 09 Jul 24 12:18 PDT |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| delete  | -p offline-docker-426900         | offline-docker-426900     | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:12 PDT | 09 Jul 24 12:13 PDT |
	| start   | -p docker-flags-247100           | docker-flags-247100       | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:13 PDT | 09 Jul 24 12:21 PDT |
	|         | --cache-images=false             |                           |                   |         |                     |                     |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --install-addons=false           |                           |                   |         |                     |                     |
	|         | --wait=false                     |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR             |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT             |                           |                   |         |                     |                     |
	|         | --docker-opt=debug               |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true            |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-881500         | force-systemd-env-881500  | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:14 PDT | 09 Jul 24 12:14 PDT |
	|         | ssh docker info --format         |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-881500      | force-systemd-env-881500  | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:14 PDT | 09 Jul 24 12:16 PDT |
	| start   | -p running-upgrade-745900        | minikube                  | minikube1\jenkins | v1.26.0 | 09 Jul 24 12:16 PDT | 09 Jul 24 12:23 PDT |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --vm-driver=hyperv               |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-715200     | kubernetes-upgrade-715200 | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:18 PDT |                     |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0     |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-715200     | kubernetes-upgrade-715200 | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:18 PDT |                     |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| start   | -p cert-expiration-206200        | cert-expiration-206200    | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:20 PDT |                     |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --cert-expiration=8760h          |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| ssh     | docker-flags-247100 ssh          | docker-flags-247100       | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:21 PDT | 09 Jul 24 12:21 PDT |
	|         | sudo systemctl show docker       |                           |                   |         |                     |                     |
	|         | --property=Environment           |                           |                   |         |                     |                     |
	|         | --no-pager                       |                           |                   |         |                     |                     |
	| ssh     | docker-flags-247100 ssh          | docker-flags-247100       | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:21 PDT | 09 Jul 24 12:21 PDT |
	|         | sudo systemctl show docker       |                           |                   |         |                     |                     |
	|         | --property=ExecStart             |                           |                   |         |                     |                     |
	|         | --no-pager                       |                           |                   |         |                     |                     |
	| delete  | -p docker-flags-247100           | docker-flags-247100       | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:21 PDT | 09 Jul 24 12:22 PDT |
	| start   | -p cert-options-402400           | cert-options-402400       | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:22 PDT |                     |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1        |                           |                   |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15    |                           |                   |         |                     |                     |
	|         | --apiserver-names=localhost      |                           |                   |         |                     |                     |
	|         | --apiserver-names=www.google.com |                           |                   |         |                     |                     |
	|         | --apiserver-port=8555            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-745900        | running-upgrade-745900    | minikube1\jenkins | v1.33.1 | 09 Jul 24 12:23 PDT |                     |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	|---------|----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 12:23:36
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 12:23:36.130043    3584 out.go:291] Setting OutFile to fd 1996 ...
	I0709 12:23:36.130043    3584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 12:23:36.130845    3584 out.go:304] Setting ErrFile to fd 1080...
	I0709 12:23:36.130845    3584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 12:23:36.155308    3584 out.go:298] Setting JSON to false
	I0709 12:23:36.160313    3584 start.go:129] hostinfo: {"hostname":"minikube1","uptime":11284,"bootTime":1720541731,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 12:23:36.160595    3584 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 12:23:36.165530    3584 out.go:177] * [running-upgrade-745900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 12:23:36.169907    3584 notify.go:220] Checking for updates...
	I0709 12:23:36.173910    3584 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 12:23:36.178573    3584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 12:23:36.180252    3584 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 12:23:36.183499    3584 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 12:23:36.190490    3584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 12:23:36.194472    3584 config.go:182] Loaded profile config "running-upgrade-745900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0709 12:23:36.197476    3584 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0709 12:23:34.520329   14920 start.go:364] duration metric: took 3m16.5546174s to acquireMachinesLock for "cert-expiration-206200"
	I0709 12:23:34.520626   14920 start.go:96] Skipping create...Using existing machine configuration
	I0709 12:23:34.520626   14920 fix.go:54] fixHost starting: 
	I0709 12:23:34.521622   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:23:37.072799   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:37.072799   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:37.072799   14920 fix.go:112] recreateIfNeeded on cert-expiration-206200: state=Running err=<nil>
	W0709 12:23:37.072902   14920 fix.go:138] unexpected machine state, will restart: <nil>
	I0709 12:23:37.076086   14920 out.go:177] * Updating the running hyperv "cert-expiration-206200" VM ...
	I0709 12:23:34.352681   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:34.352968   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:34.365562   10600 main.go:141] libmachine: Using SSH client type: native
	I0709 12:23:34.366315   10600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.204.145 22 <nil> <nil>}
	I0709 12:23:34.366315   10600 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720553009
	I0709 12:23:34.519742   10600 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 19:23:29 UTC 2024
	
	I0709 12:23:34.519834   10600 fix.go:236] clock set: Tue Jul  9 19:23:29 UTC 2024
	 (err=<nil>)
	I0709 12:23:34.519834   10600 start.go:83] releasing machines lock for "kubernetes-upgrade-715200", held for 1m4.7548357s
	I0709 12:23:34.520096   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:37.036053   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:37.036549   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:37.036667   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:36.201469    3584 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 12:23:37.079956   14920 machine.go:94] provisionDockerMachine start ...
	I0709 12:23:37.079956   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:23:39.662375   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:39.662375   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:39.662600   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:42.591735    3584 out.go:177] * Using the hyperv driver based on existing profile
	I0709 12:23:42.595530    3584 start.go:297] selected driver: hyperv
	I0709 12:23:42.595530    3584 start.go:901] validating driver "hyperv" against &{Name:running-upgrade-745900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-745900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.202.31 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0709 12:23:42.595530    3584 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0709 12:23:42.646726    3584 cni.go:84] Creating CNI manager for ""
	I0709 12:23:42.646726    3584 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 12:23:42.647418    3584 start.go:340] cluster config:
	{Name:running-upgrade-745900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-745900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.202.31 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0709 12:23:42.647817    3584 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 12:23:42.677355    3584 out.go:177] * Starting "running-upgrade-745900" primary control-plane node in "running-upgrade-745900" cluster
	I0709 12:23:40.114401   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:40.115277   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:40.123166   10600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 12:23:40.123166   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:40.139607   10600 ssh_runner.go:195] Run: cat /version.json
	I0709 12:23:40.139607   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-715200 ).state
	I0709 12:23:42.654280   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:42.654357   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:42.654357   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:42.693568   10600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:42.693568   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:42.693568   10600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:42.681405    3584 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0709 12:23:42.681613    3584 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4
	I0709 12:23:42.681613    3584 cache.go:56] Caching tarball of preloaded images
	I0709 12:23:42.682175    3584 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0709 12:23:42.682410    3584 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0709 12:23:42.682740    3584 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\running-upgrade-745900\config.json ...
	I0709 12:23:42.685184    3584 start.go:360] acquireMachinesLock for running-upgrade-745900: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0709 12:23:42.720304   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:23:42.720304   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:42.725705   14920 main.go:141] libmachine: Using SSH client type: native
	I0709 12:23:42.725705   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.195.41 22 <nil> <nil>}
	I0709 12:23:42.725705   14920 main.go:141] libmachine: About to run SSH command:
	hostname
	I0709 12:23:42.878025   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-206200
	
	I0709 12:23:42.878025   14920 buildroot.go:166] provisioning hostname "cert-expiration-206200"
	I0709 12:23:42.878099   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:23:45.282001   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:45.282001   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:45.282001   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:45.443820   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:45.444492   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:45.444887   10600 sshutil.go:53] new ssh client: &{IP:172.18.204.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-715200\id_rsa Username:docker}
	I0709 12:23:45.475254   10600 main.go:141] libmachine: [stdout =====>] : 172.18.204.145
	
	I0709 12:23:45.475254   10600 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:45.475552   10600 sshutil.go:53] new ssh client: &{IP:172.18.204.145 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-715200\id_rsa Username:docker}
	I0709 12:23:45.552995   10600 ssh_runner.go:235] Completed: cat /version.json: (5.4133674s)
	I0709 12:23:45.566507   10600 ssh_runner.go:195] Run: systemctl --version
	I0709 12:23:47.581783   10600 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.457758s)
	I0709 12:23:47.581783   10600 ssh_runner.go:235] Completed: systemctl --version: (2.0152685s)
	W0709 12:23:47.581933   10600 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0709 12:23:47.582088   10600 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0709 12:23:47.582088   10600 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0709 12:23:47.596626   10600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0709 12:23:47.606753   10600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 12:23:47.618822   10600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0709 12:23:47.648384   10600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0709 12:23:47.681627   10600 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0709 12:23:47.681627   10600 start.go:494] detecting cgroup driver to use...
	I0709 12:23:47.681627   10600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 12:23:47.729245   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 12:23:47.763032   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 12:23:47.784949   10600 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 12:23:47.796951   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 12:23:47.831328   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 12:23:47.863612   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 12:23:47.903944   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 12:23:47.935644   10600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 12:23:47.967034   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 12:23:47.999038   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 12:23:48.038934   10600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 12:23:48.086785   10600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 12:23:48.123564   10600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 12:23:48.160873   10600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 12:23:48.454577   10600 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0709 12:23:48.496694   10600 start.go:494] detecting cgroup driver to use...
	I0709 12:23:48.511045   10600 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 12:23:48.562196   10600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 12:23:48.601773   10600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 12:23:48.652173   10600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 12:23:48.693124   10600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 12:23:48.719323   10600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 12:23:48.770012   10600 ssh_runner.go:195] Run: which cri-dockerd
	I0709 12:23:48.788361   10600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 12:23:48.808454   10600 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 12:23:48.861431   10600 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 12:23:49.145115   10600 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 12:23:47.826287   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:23:47.826287   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:47.832292   14920 main.go:141] libmachine: Using SSH client type: native
	I0709 12:23:47.832292   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.195.41 22 <nil> <nil>}
	I0709 12:23:47.832292   14920 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-206200 && echo "cert-expiration-206200" | sudo tee /etc/hostname
	I0709 12:23:47.991032   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-206200
	
	I0709 12:23:47.991032   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:23:50.216897   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:50.216897   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:50.217579   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:49.413707   10600 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 12:23:49.414030   10600 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 12:23:49.467127   10600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 12:23:49.739272   10600 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0709 12:23:52.807676   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:23:52.808690   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:52.815339   14920 main.go:141] libmachine: Using SSH client type: native
	I0709 12:23:52.815454   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.195.41 22 <nil> <nil>}
	I0709 12:23:52.815454   14920 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-206200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-206200/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-206200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0709 12:23:52.951966   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 12:23:52.951966   14920 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0709 12:23:52.951966   14920 buildroot.go:174] setting up certificates
	I0709 12:23:52.951966   14920 provision.go:84] configureAuth start
	I0709 12:23:52.951966   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:23:55.111694   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:55.112478   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:55.112478   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:23:57.702532   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:23:57.702532   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:57.703367   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:23:59.894135   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:23:59.894433   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:23:59.894433   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:24:02.486654   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:24:02.486654   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:02.486654   14920 provision.go:143] copyHostCerts
	I0709 12:24:02.487504   14920 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0709 12:24:02.487574   14920 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0709 12:24:02.488138   14920 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0709 12:24:02.489841   14920 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0709 12:24:02.489841   14920 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0709 12:24:02.490281   14920 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0709 12:24:02.491695   14920 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0709 12:24:02.491695   14920 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0709 12:24:02.491959   14920 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0709 12:24:02.493374   14920 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-expiration-206200 san=[127.0.0.1 172.18.195.41 cert-expiration-206200 localhost minikube]
	I0709 12:24:02.558535   14920 provision.go:177] copyRemoteCerts
	I0709 12:24:02.571772   14920 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0709 12:24:02.571772   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:24:04.774947   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:24:04.774947   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:04.775252   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:24:07.389237   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:24:07.389237   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:07.390100   14920 sshutil.go:53] new ssh client: &{IP:172.18.195.41 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-206200\id_rsa Username:docker}
	I0709 12:24:07.505408   14920 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.933618s)
	I0709 12:24:07.506433   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0709 12:24:07.555848   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0709 12:24:07.608983   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0709 12:24:07.656824   14920 provision.go:87] duration metric: took 14.7048022s to configureAuth
	I0709 12:24:07.656824   14920 buildroot.go:189] setting minikube options for container-runtime
	I0709 12:24:07.657828   14920 config.go:182] Loaded profile config "cert-expiration-206200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 12:24:07.658842   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:24:09.830049   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:24:09.830049   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:09.830153   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:24:12.421268   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:24:12.421330   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:12.427005   14920 main.go:141] libmachine: Using SSH client type: native
	I0709 12:24:12.427658   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.195.41 22 <nil> <nil>}
	I0709 12:24:12.427658   14920 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0709 12:24:12.560707   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0709 12:24:12.560860   14920 buildroot.go:70] root file system type: tmpfs
	I0709 12:24:12.561005   14920 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0709 12:24:12.561125   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:24:14.734736   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:24:14.734736   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:14.734736   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:24:17.321167   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:24:17.321167   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:17.327680   14920 main.go:141] libmachine: Using SSH client type: native
	I0709 12:24:17.328328   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.195.41 22 <nil> <nil>}
	I0709 12:24:17.328478   14920 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0709 12:24:17.486565   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0709 12:24:17.486626   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:24:19.641019   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:24:19.641019   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:19.641019   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:24:22.255294   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:24:22.255294   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:22.262553   14920 main.go:141] libmachine: Using SSH client type: native
	I0709 12:24:22.263084   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.195.41 22 <nil> <nil>}
	I0709 12:24:22.263084   14920 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0709 12:24:22.418465   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0709 12:24:22.418465   14920 machine.go:97] duration metric: took 45.3383394s to provisionDockerMachine
	I0709 12:24:22.418527   14920 start.go:293] postStartSetup for "cert-expiration-206200" (driver="hyperv")
	I0709 12:24:22.418527   14920 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0709 12:24:22.431261   14920 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0709 12:24:22.431261   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:24:24.620798   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:24:24.620798   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:24.620981   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:24:27.213391   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:24:27.213391   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:27.213916   14920 sshutil.go:53] new ssh client: &{IP:172.18.195.41 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-206200\id_rsa Username:docker}
	I0709 12:24:27.329829   14920 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.89855s)
	I0709 12:24:27.343148   14920 ssh_runner.go:195] Run: cat /etc/os-release
	I0709 12:24:27.350200   14920 info.go:137] Remote host: Buildroot 2023.02.9
	I0709 12:24:27.350200   14920 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0709 12:24:27.350200   14920 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0709 12:24:27.351840   14920 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem -> 150322.pem in /etc/ssl/certs
	I0709 12:24:27.365523   14920 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0709 12:24:27.389350   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /etc/ssl/certs/150322.pem (1708 bytes)
	I0709 12:24:27.440688   14920 start.go:296] duration metric: took 5.0221421s for postStartSetup
	I0709 12:24:27.440688   14920 fix.go:56] duration metric: took 52.9198643s for fixHost
	I0709 12:24:27.440688   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:24:29.586323   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:24:29.586323   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:29.587199   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:24:32.133536   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:24:32.133536   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:32.154406   14920 main.go:141] libmachine: Using SSH client type: native
	I0709 12:24:32.154406   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.195.41 22 <nil> <nil>}
	I0709 12:24:32.154406   14920 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0709 12:24:32.281855   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720553072.289928427
	
	I0709 12:24:32.281855   14920 fix.go:216] guest clock: 1720553072.289928427
	I0709 12:24:32.281855   14920 fix.go:229] Guest: 2024-07-09 12:24:32.289928427 -0700 PDT Remote: 2024-07-09 12:24:27.4406883 -0700 PDT m=+255.879329501 (delta=4.849240127s)
	I0709 12:24:32.281976   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:24:34.363173   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:24:34.363173   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:34.363173   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:24:36.999743   14960 start.go:364] duration metric: took 2m26.024185s to acquireMachinesLock for "cert-options-402400"
	I0709 12:24:37.000715   14960 start.go:93] Provisioning new machine with config: &{Name:cert-options-402400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.2 ClusterName:cert-options-402400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0709 12:24:37.000850   14960 start.go:125] createHost starting for "" (driver="hyperv")
	I0709 12:24:37.006293   14960 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0709 12:24:37.006713   14960 start.go:159] libmachine.API.Create for "cert-options-402400" (driver="hyperv")
	I0709 12:24:37.006713   14960 client.go:168] LocalClient.Create starting
	I0709 12:24:37.007181   14960 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0709 12:24:37.007181   14960 main.go:141] libmachine: Decoding PEM data...
	I0709 12:24:37.007181   14960 main.go:141] libmachine: Parsing certificate...
	I0709 12:24:37.007734   14960 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0709 12:24:37.007925   14960 main.go:141] libmachine: Decoding PEM data...
	I0709 12:24:37.007925   14960 main.go:141] libmachine: Parsing certificate...
	I0709 12:24:37.007925   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0709 12:24:38.987093   14960 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0709 12:24:38.996615   14960 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:38.996615   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0709 12:24:36.843797   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:24:36.843797   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:36.850215   14920 main.go:141] libmachine: Using SSH client type: native
	I0709 12:24:36.850617   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcaa940] 0xcad520 <nil>  [] 0s} 172.18.195.41 22 <nil> <nil>}
	I0709 12:24:36.850617   14920 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1720553072
	I0709 12:24:36.999743   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jul  9 19:24:32 UTC 2024
	
	I0709 12:24:36.999743   14920 fix.go:236] clock set: Tue Jul  9 19:24:32 UTC 2024
	 (err=<nil>)
	I0709 12:24:36.999743   14920 start.go:83] releasing machines lock for "cert-expiration-206200", held for 1m2.4791307s
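The `sudo date -s @1720553072` above sets the guest clock from a Unix epoch second; the same value can be decoded locally (GNU coreutils `date` assumed) to confirm it matches the `Tue Jul  9 19:24:32 UTC 2024` the VM echoed back:

```shell
# Decode the epoch second that fix.go pushed to the VM (GNU date assumed)
date -u -d @1720553072
# Tue Jul  9 19:24:32 UTC 2024
```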
	I0709 12:24:36.999743   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:24:39.187915   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:24:39.196589   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:39.196589   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:24:40.766042   14960 main.go:141] libmachine: [stdout =====>] : False
	
	I0709 12:24:40.766042   14960 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:40.774465   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 12:24:42.321621   14960 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 12:24:42.321695   14960 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:42.321775   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 12:24:41.743503   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:24:41.743503   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:41.761740   14920 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0709 12:24:41.761862   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:24:41.771607   14920 ssh_runner.go:195] Run: cat /version.json
	I0709 12:24:41.771607   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-206200 ).state
	I0709 12:24:44.030056   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:24:44.030056   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:44.030056   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:24:44.030450   14920 main.go:141] libmachine: [stdout =====>] : Running
	
	I0709 12:24:44.030450   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:44.030574   14920 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-206200 ).networkadapters[0]).ipaddresses[0]
	I0709 12:24:46.224978   14960 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 12:24:46.224978   14960 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:46.239064   14960 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1720433170-19199-amd64.iso...
	I0709 12:24:46.675961   14960 main.go:141] libmachine: Creating SSH key...
	I0709 12:24:47.331880   14960 main.go:141] libmachine: Creating VM...
	I0709 12:24:47.331880   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0709 12:24:46.810924   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:24:46.810924   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:46.814706   14920 sshutil.go:53] new ssh client: &{IP:172.18.195.41 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-206200\id_rsa Username:docker}
	I0709 12:24:46.836674   14920 main.go:141] libmachine: [stdout =====>] : 172.18.195.41
	
	I0709 12:24:46.836674   14920 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:46.837927   14920 sshutil.go:53] new ssh client: &{IP:172.18.195.41 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-expiration-206200\id_rsa Username:docker}
	I0709 12:24:46.933729   14920 ssh_runner.go:235] Completed: cat /version.json: (5.1621029s)
	I0709 12:24:46.946477   14920 ssh_runner.go:195] Run: systemctl --version
	I0709 12:24:48.940943   14920 ssh_runner.go:235] Completed: systemctl --version: (1.9937877s)
	I0709 12:24:48.941175   14920 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.1793267s)
	W0709 12:24:48.941283   14920 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0709 12:24:48.941435   14920 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0709 12:24:48.941494   14920 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0709 12:24:48.956916   14920 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0709 12:24:48.966019   14920 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0709 12:24:48.978343   14920 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0709 12:24:48.986883   14920 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0709 12:24:48.995341   14920 start.go:494] detecting cgroup driver to use...
	I0709 12:24:48.995532   14920 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 12:24:49.046990   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0709 12:24:49.087977   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0709 12:24:49.110013   14920 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0709 12:24:49.121495   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0709 12:24:49.155893   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 12:24:49.189361   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0709 12:24:49.221388   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0709 12:24:49.257952   14920 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0709 12:24:49.299705   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0709 12:24:49.341208   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0709 12:24:49.373975   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0709 12:24:49.410142   14920 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0709 12:24:49.444960   14920 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0709 12:24:49.476834   14920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 12:24:49.740052   14920 ssh_runner.go:195] Run: sudo systemctl restart containerd
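The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place before the restart. The net effect on the relevant keys is roughly the fragment below — a sketch only: the section paths follow the standard containerd v1 CRI plugin layout, and the full file contains much more than this.

```toml
# Sketch of the keys minikube's sed edits leave in /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri"]
  enable_unprivileged_ports = true
  sandbox_image = "registry.k8s.io/pause:3.9"

  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false   # "cgroupfs" cgroup driver, per containerd.go:146
```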
	I0709 12:24:49.771655   14920 start.go:494] detecting cgroup driver to use...
	I0709 12:24:49.785354   14920 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0709 12:24:49.829910   14920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 12:24:49.867031   14920 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0709 12:24:49.927468   14920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0709 12:24:49.968357   14920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0709 12:24:49.989469   14920 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0709 12:24:50.038680   14920 ssh_runner.go:195] Run: which cri-dockerd
	I0709 12:24:50.056366   14920 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0709 12:24:50.075766   14920 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0709 12:24:50.124980   14920 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0709 12:24:50.415099   14920 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0709 12:24:50.695464   14920 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0709 12:24:50.695691   14920 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0709 12:24:50.749146   14920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 12:24:51.029256   14920 ssh_runner.go:195] Run: sudo systemctl restart docker
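docker.go:574 above reports configuring Docker for the `cgroupfs` cgroup driver by scp-ing a 130-byte `/etc/docker/daemon.json`. The exact payload is not shown in the log; the file minikube writes has roughly this shape (field names assumed from dockerd's documented daemon.json options):

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
```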
	I0709 12:24:50.450086   14960 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0709 12:24:50.450086   14960 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:50.456469   14960 main.go:141] libmachine: Using switch "Default Switch"
	I0709 12:24:50.456593   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0709 12:24:52.228150   14960 main.go:141] libmachine: [stdout =====>] : True
	
	I0709 12:24:52.237044   14960 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:52.237044   14960 main.go:141] libmachine: Creating VHD
	I0709 12:24:52.237044   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-options-402400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0709 12:24:55.972299   14960 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-options-402400\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 7CA0A68D-A8A0-4871-A1F0-8C55A058D043
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0709 12:24:55.972299   14960 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:55.972299   14960 main.go:141] libmachine: Writing magic tar header
	I0709 12:24:55.972299   14960 main.go:141] libmachine: Writing SSH key tar header
	I0709 12:24:55.981508   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-options-402400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-options-402400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0709 12:24:59.113401   14960 main.go:141] libmachine: [stdout =====>] : 
	I0709 12:24:59.113401   14960 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:24:59.125881   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-options-402400\disk.vhd' -SizeBytes 20000MB
	I0709 12:25:01.121955   10600 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3824175s)
	I0709 12:25:01.135787   10600 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0709 12:25:01.206450   10600 out.go:177] 
	W0709 12:25:01.210298   10600 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 09 19:17:29 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:17:29 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:29.966775003Z" level=info msg="Starting up"
	Jul 09 19:17:29 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:29.968062977Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 19:17:29 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:29.969722044Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.009364370Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039392812Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039626308Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039769105Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.039843004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.040851685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.040999982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041266377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041633470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041659670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.041673970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.042219260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.043067444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046474580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046577479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046747375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.046788075Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.047356964Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.047418463Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.051727483Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.051965878Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052062277Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052190474Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052214074Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052389871Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052662365Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.052874662Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053017159Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053061358Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053078858Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053096657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053111857Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053131357Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053148256Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053163656Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053177956Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053192056Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053215655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053248955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053262954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053278454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053291554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053348653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053365352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053379852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053397452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053414252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053428651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053442351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053456051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053572349Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053610948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053628648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053659947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.053989241Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054120038Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054142138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054157738Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054170037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054186037Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054198537Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054650529Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054795626Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054857325Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 19:17:30 kubernetes-upgrade-715200 dockerd[664]: time="2024-07-09T19:17:30.054879524Z" level=info msg="containerd successfully booted in 0.049404s"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.029698846Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.181028910Z" level=info msg="Loading containers: start."
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.580056459Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.727604493Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.826843234Z" level=info msg="Loading containers: done."
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.853540484Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.854533278Z" level=info msg="Daemon has completed initialization"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.910034765Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 19:17:31 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:31.910362664Z" level=info msg="API listen on [::]:2376"
	Jul 09 19:17:31 kubernetes-upgrade-715200 systemd[1]: Started Docker Application Container Engine.
	Jul 09 19:17:59 kubernetes-upgrade-715200 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.082648121Z" level=info msg="Processing signal 'terminated'"
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.084436643Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.084934748Z" level=info msg="Daemon shutdown complete"
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.084980249Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 19:17:59 kubernetes-upgrade-715200 dockerd[658]: time="2024-07-09T19:17:59.085009149Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 19:18:00 kubernetes-upgrade-715200 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 19:18:00 kubernetes-upgrade-715200 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 19:18:00 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:00.157745091Z" level=info msg="Starting up"
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:00.159324709Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:00.162799450Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1180
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.203422329Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.234924400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235054402Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235131303Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235165203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235228804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235259804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235664309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235834711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.235873711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.236081614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.236308816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.236695221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.240766869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.240824470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241044172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241149373Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241186174Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.241208074Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242317487Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242403188Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242430989Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242456789Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242482889Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.242709492Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.244334011Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.244628715Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.244983819Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245208321Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245611726Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245701627Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245794028Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245856729Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.245981330Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246070131Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246105732Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246131832Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246169833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246227233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246244934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246258634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246273034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246286334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246299934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246314034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246331135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246347735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246360435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246373035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246386535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246405635Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246429436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246449736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246465136Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246578937Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246650138Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246665138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246679139Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246692439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246706839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.246718439Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247096144Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247174645Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247227345Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 19:18:00 kubernetes-upgrade-715200 dockerd[1180]: time="2024-07-09T19:18:00.247256445Z" level=info msg="containerd successfully booted in 0.045735s"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.206803953Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.242228370Z" level=info msg="Loading containers: start."
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.531765582Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.649880674Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.747493425Z" level=info msg="Loading containers: done."
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.773254728Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.773336529Z" level=info msg="Daemon has completed initialization"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.824065227Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 19:18:01 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:01.824183528Z" level=info msg="API listen on [::]:2376"
	Jul 09 19:18:01 kubernetes-upgrade-715200 systemd[1]: Started Docker Application Container Engine.
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.774201534Z" level=info msg="Processing signal 'terminated'"
	Jul 09 19:18:14 kubernetes-upgrade-715200 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.775592051Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.775940455Z" level=info msg="Daemon shutdown complete"
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.776005356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 19:18:14 kubernetes-upgrade-715200 dockerd[1174]: time="2024-07-09T19:18:14.776028456Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 19:18:15 kubernetes-upgrade-715200 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 19:18:15 kubernetes-upgrade-715200 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 19:18:15 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:15.850130313Z" level=info msg="Starting up"
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:15.851633531Z" level=info msg="containerd not running, starting managed containerd"
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:15.853362351Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1647
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.891670303Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920238439Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920290340Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920337041Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920354141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920419742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920441142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.920948648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921045949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921079549Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921094050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921137950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.921669656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.924541690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.924683792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925030296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925138597Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925174198Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925205498Z" level=info msg="metadata content store policy set" policy=shared
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925458801Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925522902Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925544702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925561702Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925578402Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.925653503Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926076108Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926220310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926241410Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926256610Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926273411Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926288911Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926303411Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926319511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926335911Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926350511Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926364212Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926376612Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926398612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926414112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926428712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926443513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926464413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926494213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926507613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926523013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926541814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926559514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926572214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926585614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926598514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926615215Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926637315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926651315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926664415Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926733016Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926764916Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926788717Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926927118Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926949319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926964819Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.926977319Z" level=info msg="NRI interface is disabled by configuration."
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927233822Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927469725Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927551226Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 09 19:18:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:15.927592926Z" level=info msg="containerd successfully booted in 0.038342s"
	Jul 09 19:18:16 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:16.900369790Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 09 19:18:17 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:17.862515628Z" level=info msg="Loading containers: start."
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.130108181Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.248362375Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.348426754Z" level=info msg="Loading containers: done."
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.373377548Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.373530250Z" level=info msg="Daemon has completed initialization"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.425394261Z" level=info msg="API listen on [::]:2376"
	Jul 09 19:18:18 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:18.425552663Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 09 19:18:18 kubernetes-upgrade-715200 systemd[1]: Started Docker Application Container Engine.
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.650753418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.651190420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.651213630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.654037335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772544494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772735982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772778602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.772880749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.834567553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.840737704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.841115578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.842271212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903201366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903270098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903285005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:24 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:24.903377548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.053522345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.054271570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.054440943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.055093726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.272716661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.275791996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.276076019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.276420969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.423097017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.423275694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.429780017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.430225310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493079585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493217245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493250059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:25 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:25.493431038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.460647619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.461853702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.462263732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.467682453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559414882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559511713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559524517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.559619947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.571996677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.572478830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.572711104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:30 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:30.573665507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.508476793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.509241115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.512646505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.514100928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.605366956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.610946078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.611019699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.611505240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.611673489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.614399782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.618975812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:31 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:31.623421704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:42.797691178Z" level=info msg="ignoring event" container=35f7cfaaefdaaad81315e73b6e50a7f238adf0cf840fb3daa9065fe0c362e99f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.800314414Z" level=info msg="shim disconnected" id=35f7cfaaefdaaad81315e73b6e50a7f238adf0cf840fb3daa9065fe0c362e99f namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.800755553Z" level=warning msg="cleaning up after shim disconnected" id=35f7cfaaefdaaad81315e73b6e50a7f238adf0cf840fb3daa9065fe0c362e99f namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.800881036Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:42.966610764Z" level=info msg="ignoring event" container=c3bed982ed71cc9c45611886a5bab9569cf1c4fb29cd402afdc3169ce4718f44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.967549334Z" level=info msg="shim disconnected" id=c3bed982ed71cc9c45611886a5bab9569cf1c4fb29cd402afdc3169ce4718f44 namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.969217003Z" level=warning msg="cleaning up after shim disconnected" id=c3bed982ed71cc9c45611886a5bab9569cf1c4fb29cd402afdc3169ce4718f44 namespace=moby
	Jul 09 19:18:42 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:42.969364283Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.480480108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.480663283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.480679181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:43 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:43.481408581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.084528363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.084628965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.084650166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.085508091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.399448146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.399703353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.399753555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.400076264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.546749148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.547357565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.547489769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:44 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:44.548044785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:48.083380401Z" level=info msg="ignoring event" container=0417adc8bc14a6f1318285fef712d75f5e7028442eaec5a4188798c67e10d99d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.084430488Z" level=info msg="shim disconnected" id=0417adc8bc14a6f1318285fef712d75f5e7028442eaec5a4188798c67e10d99d namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.084650586Z" level=warning msg="cleaning up after shim disconnected" id=0417adc8bc14a6f1318285fef712d75f5e7028442eaec5a4188798c67e10d99d namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.084672385Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.111624250Z" level=warning msg="cleanup warnings time=\"2024-07-09T19:18:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:18:48.274455627Z" level=info msg="ignoring event" container=10a196cc18370a42c22ece76a1707fac4d7d24802708b9d37bb3ace0223306b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.274737724Z" level=info msg="shim disconnected" id=10a196cc18370a42c22ece76a1707fac4d7d24802708b9d37bb3ace0223306b2 namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.275200318Z" level=warning msg="cleaning up after shim disconnected" id=10a196cc18370a42c22ece76a1707fac4d7d24802708b9d37bb3ace0223306b2 namespace=moby
	Jul 09 19:18:48 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:18:48.275332816Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:19:01.883153039Z" level=info msg="ignoring event" container=af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:01.883398143Z" level=info msg="shim disconnected" id=af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c namespace=moby
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:01.883459044Z" level=warning msg="cleaning up after shim disconnected" id=af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c namespace=moby
	Jul 09 19:19:01 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:01.883468344Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.033836975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.033970377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.033986477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:19:15 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:19:15.034766790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:20:41.643665911Z" level=info msg="ignoring event" container=311161079c9a349140940fd392cde25216666467dfbe15d0356bb1105c9ff236 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.644425122Z" level=info msg="shim disconnected" id=311161079c9a349140940fd392cde25216666467dfbe15d0356bb1105c9ff236 namespace=moby
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.647429764Z" level=warning msg="cleaning up after shim disconnected" id=311161079c9a349140940fd392cde25216666467dfbe15d0356bb1105c9ff236 namespace=moby
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.647513265Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858363355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858577758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858601459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:20:41 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:20:41.858975864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.245264804Z" level=info msg="shim disconnected" id=a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003 namespace=moby
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.245426406Z" level=warning msg="cleaning up after shim disconnected" id=a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003 namespace=moby
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.245440906Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:22:33.247542631Z" level=info msg="ignoring event" container=a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.472994490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.473078191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.473092592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:22:33 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:22:33.473631998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:49.770756179Z" level=info msg="Processing signal 'terminated'"
	Jul 09 19:23:49 kubernetes-upgrade-715200 systemd[1]: Stopping Docker Application Container Engine...
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:49.970089135Z" level=info msg="shim disconnected" id=f36488950ae11b8fcff7532e726a3cdc9380a54a4973499e5a3df029656f5a3e namespace=moby
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:49.970898038Z" level=warning msg="cleaning up after shim disconnected" id=f36488950ae11b8fcff7532e726a3cdc9380a54a4973499e5a3df029656f5a3e namespace=moby
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:49.971512341Z" level=info msg="ignoring event" container=f36488950ae11b8fcff7532e726a3cdc9380a54a4973499e5a3df029656f5a3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:49 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:49.972001343Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.014132324Z" level=info msg="ignoring event" container=30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.015002828Z" level=info msg="shim disconnected" id=30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.015074628Z" level=warning msg="cleaning up after shim disconnected" id=30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.015091128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.042828347Z" level=info msg="ignoring event" container=7a55d187e689fc7831d3e5e90c8cdd2383af903bcd5872d4cf6c34a1b388b380 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.045966761Z" level=info msg="shim disconnected" id=7a55d187e689fc7831d3e5e90c8cdd2383af903bcd5872d4cf6c34a1b388b380 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.048715572Z" level=warning msg="cleaning up after shim disconnected" id=7a55d187e689fc7831d3e5e90c8cdd2383af903bcd5872d4cf6c34a1b388b380 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.048854873Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.051459484Z" level=info msg="ignoring event" container=1ab1a023780f9d559a0b7f322662f69b1f3dfff428f277a11a905ff7165b9a71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.053463093Z" level=info msg="shim disconnected" id=1ab1a023780f9d559a0b7f322662f69b1f3dfff428f277a11a905ff7165b9a71 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.053586193Z" level=warning msg="cleaning up after shim disconnected" id=1ab1a023780f9d559a0b7f322662f69b1f3dfff428f277a11a905ff7165b9a71 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.053802094Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.093342764Z" level=info msg="shim disconnected" id=d750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.093431664Z" level=warning msg="cleaning up after shim disconnected" id=d750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.093453464Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.100765696Z" level=info msg="ignoring event" container=d750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.100857696Z" level=info msg="ignoring event" container=26c45188ff0e57c02752b8b5c7cb7db13ff3565f3c4475893093da999ad2448d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.100613095Z" level=info msg="shim disconnected" id=26c45188ff0e57c02752b8b5c7cb7db13ff3565f3c4475893093da999ad2448d namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.102771604Z" level=warning msg="cleaning up after shim disconnected" id=26c45188ff0e57c02752b8b5c7cb7db13ff3565f3c4475893093da999ad2448d namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.112279745Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.113293350Z" level=info msg="shim disconnected" id=90ecf4c718ddc8ee59cb1f952e4c34ab160349d676fe1cb69986996deaea9152 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.113399450Z" level=warning msg="cleaning up after shim disconnected" id=90ecf4c718ddc8ee59cb1f952e4c34ab160349d676fe1cb69986996deaea9152 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.113482550Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.112231145Z" level=info msg="shim disconnected" id=2402bcb10993450faa7e2f67ffbc8039db9fc743b2cb23624ca09f9fc5977909 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.117674268Z" level=warning msg="cleaning up after shim disconnected" id=2402bcb10993450faa7e2f67ffbc8039db9fc743b2cb23624ca09f9fc5977909 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.117745869Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.119850678Z" level=info msg="shim disconnected" id=85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.120075879Z" level=warning msg="cleaning up after shim disconnected" id=85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.120293480Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.123305193Z" level=info msg="shim disconnected" id=fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.123362093Z" level=warning msg="cleaning up after shim disconnected" id=fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.123374593Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.123762395Z" level=info msg="ignoring event" container=2402bcb10993450faa7e2f67ffbc8039db9fc743b2cb23624ca09f9fc5977909 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.124251197Z" level=info msg="ignoring event" container=85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.124398797Z" level=info msg="ignoring event" container=90ecf4c718ddc8ee59cb1f952e4c34ab160349d676fe1cb69986996deaea9152 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.124757399Z" level=info msg="ignoring event" container=fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.132488832Z" level=info msg="shim disconnected" id=ebedd5df1e7ddf4730b97fd4d185b66a6aaa78b164467717c5c437de2bc63d36 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.132918834Z" level=info msg="ignoring event" container=ebedd5df1e7ddf4730b97fd4d185b66a6aaa78b164467717c5c437de2bc63d36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.133509736Z" level=warning msg="cleaning up after shim disconnected" id=ebedd5df1e7ddf4730b97fd4d185b66a6aaa78b164467717c5c437de2bc63d36 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.133727737Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.184719556Z" level=warning msg="cleanup warnings time=\"2024-07-09T19:23:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.303317466Z" level=info msg="shim disconnected" id=0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.303376966Z" level=warning msg="cleaning up after shim disconnected" id=0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5 namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.303388466Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:50.307043182Z" level=warning msg="cleanup warnings time=\"2024-07-09T19:23:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 09 19:23:50 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:50.308132086Z" level=info msg="ignoring event" container=0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:54.843491962Z" level=info msg="shim disconnected" id=16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541 namespace=moby
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:54.843849264Z" level=warning msg="cleaning up after shim disconnected" id=16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541 namespace=moby
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:54.844061564Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:23:54 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:54.847986081Z" level=info msg="ignoring event" container=16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:59.907304255Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:23:59.948124666Z" level=info msg="ignoring event" container=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:59.948871575Z" level=info msg="shim disconnected" id=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7 namespace=moby
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:59.948980177Z" level=warning msg="cleaning up after shim disconnected" id=08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7 namespace=moby
	Jul 09 19:23:59 kubernetes-upgrade-715200 dockerd[1647]: time="2024-07-09T19:23:59.949010977Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.017412625Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.018166734Z" level=info msg="Daemon shutdown complete"
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.018335036Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 09 19:24:00 kubernetes-upgrade-715200 dockerd[1640]: time="2024-07-09T19:24:00.018348536Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Deactivated successfully.
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Consumed 11.555s CPU time.
	Jul 09 19:24:01 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	Jul 09 19:24:01 kubernetes-upgrade-715200 dockerd[5468]: time="2024-07-09T19:24:01.094168712Z" level=info msg="Starting up"
	Jul 09 19:25:01 kubernetes-upgrade-715200 dockerd[5468]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 09 19:25:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 19:25:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 19:25:01 kubernetes-upgrade-715200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0709 12:25:01.213682   10600 out.go:239] * 
	W0709 12:25:01.215371   10600 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0709 12:25:01.221924   10600 out.go:177] 
	I0709 12:25:01.620385   14960 main.go:141] libmachine: [stdout =====>] : 
	I0709 12:25:01.620385   14960 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:25:01.625154   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM cert-options-402400 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\cert-options-402400' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0709 12:25:04.038444   14920 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.0091401s)
	I0709 12:25:04.050989   14920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0709 12:25:04.102799   14920 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0709 12:25:04.159384   14920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 12:25:04.205021   14920 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0709 12:25:04.457884   14920 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0709 12:25:04.671919   14920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 12:25:04.923316   14920 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0709 12:25:04.976890   14920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0709 12:25:05.015531   14920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 12:25:05.243406   14920 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0709 12:25:05.377257   14920 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0709 12:25:05.389731   14920 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0709 12:25:05.402583   14920 start.go:562] Will wait 60s for crictl version
	I0709 12:25:05.416141   14920 ssh_runner.go:195] Run: which crictl
	I0709 12:25:05.437860   14920 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0709 12:25:05.513842   14920 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0709 12:25:05.526832   14920 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 12:25:05.577615   14920 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0709 12:25:05.623880   14920 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0709 12:25:05.623880   14920 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0709 12:25:05.630775   14920 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0709 12:25:05.630775   14920 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0709 12:25:05.630775   14920 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0709 12:25:05.630775   14920 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:d8:ef:aa Flags:up|broadcast|multicast|running}
	I0709 12:25:05.632461   14920 ip.go:210] interface addr: fe80::151e:1f2f:657a:4d46/64
	I0709 12:25:05.632461   14920 ip.go:210] interface addr: 172.18.192.1/20
	I0709 12:25:05.652532   14920 ssh_runner.go:195] Run: grep 172.18.192.1	host.minikube.internal$ /etc/hosts
	I0709 12:25:05.664797   14920 kubeadm.go:877] updating cluster {Name:cert-expiration-206200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:cert-expiration-206200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.195.41 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0709 12:25:05.665347   14920 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 12:25:05.676313   14920 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 12:25:05.707007   14920 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 12:25:05.707007   14920 docker.go:615] Images already preloaded, skipping extraction
	I0709 12:25:05.722652   14920 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0709 12:25:05.748968   14920 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0709 12:25:05.748968   14920 cache_images.go:84] Images are preloaded, skipping loading
	I0709 12:25:05.748968   14920 kubeadm.go:928] updating node { 172.18.195.41 8443 v1.30.2 docker true true} ...
	I0709 12:25:05.749254   14920 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-206200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.195.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:cert-expiration-206200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0709 12:25:05.758095   14920 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0709 12:25:05.802482   14920 cni.go:84] Creating CNI manager for ""
	I0709 12:25:05.802550   14920 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 12:25:05.802550   14920 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0709 12:25:05.802613   14920 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.195.41 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-206200 NodeName:cert-expiration-206200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.195.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.195.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0709 12:25:05.802945   14920 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.195.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "cert-expiration-206200"
	  kubeletExtraArgs:
	    node-ip: 172.18.195.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.195.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0709 12:25:05.814365   14920 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0709 12:25:05.835278   14920 binaries.go:44] Found k8s binaries, skipping transfer
	I0709 12:25:05.848275   14920 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0709 12:25:05.867585   14920 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0709 12:25:05.900608   14920 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0709 12:25:05.936702   14920 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0709 12:25:05.998016   14920 ssh_runner.go:195] Run: grep 172.18.195.41	control-plane.minikube.internal$ /etc/hosts
	I0709 12:25:06.018764   14920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0709 12:25:06.271488   14920 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0709 12:25:06.319831   14920 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200 for IP: 172.18.195.41
	I0709 12:25:06.319977   14920 certs.go:194] generating shared ca certs ...
	I0709 12:25:06.319977   14920 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 12:25:06.320678   14920 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0709 12:25:06.321061   14920 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0709 12:25:06.321242   14920 certs.go:256] generating profile certs ...
	W0709 12:25:06.322025   14920 out.go:239] ! Certificate client.crt has expired. Generating a new one...
	I0709 12:25:06.322025   14920 certs.go:624] cert expired C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\client.crt: expiration: 2024-07-09 19:19:34 +0000 UTC, now: 2024-07-09 12:25:06.322025 -0700 PDT m=+294.760522401
	I0709 12:25:06.322987   14920 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\client.key
	I0709 12:25:06.323191   14920 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\client.crt with IP's: []
	I0709 12:25:06.486192   14920 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\client.crt ...
	I0709 12:25:06.486192   14920 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\client.crt: {Name:mk093bd9a72d4a697d8055f3583ea4de7e7ced96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 12:25:06.486192   14920 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\client.key ...
	I0709 12:25:06.486192   14920 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\client.key: {Name:mk4d0746a6590a574d01f1b9f7c060d0e66b3a51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0709 12:25:06.496405   14920 out.go:239] ! Certificate apiserver.crt.06e96b97 has expired. Generating a new one...
	I0709 12:25:06.496405   14920 certs.go:624] cert expired C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.crt.06e96b97: expiration: 2024-07-09 19:19:34 +0000 UTC, now: 2024-07-09 12:25:06.496405 -0700 PDT m=+294.934901701
	I0709 12:25:06.496904   14920 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.key.06e96b97
	I0709 12:25:06.496904   14920 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.crt.06e96b97 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.195.41]
	I0709 12:25:05.495756   14960 main.go:141] libmachine: [stdout =====>] : 
	Name                State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                ----- ----------- ----------------- ------   ------             -------
	cert-options-402400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0709 12:25:05.495756   14960 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:25:05.495756   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName cert-options-402400 -DynamicMemoryEnabled $false
	I0709 12:25:07.979647   14960 main.go:141] libmachine: [stdout =====>] : 
	I0709 12:25:07.979647   14960 main.go:141] libmachine: [stderr =====>] : 
	I0709 12:25:07.982869   14960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor cert-options-402400 -Count 2
	I0709 12:25:06.663717   14920 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.crt.06e96b97 ...
	I0709 12:25:06.663717   14920 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.crt.06e96b97: {Name:mk0611c9688f92e2957dcad137118f03d8c0a81e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 12:25:06.663717   14920 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.key.06e96b97 ...
	I0709 12:25:06.663717   14920 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.key.06e96b97: {Name:mk9da1fc23791bec7dfe2b6c9af4a2be6f6076d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 12:25:06.663717   14920 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.crt.06e96b97 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.crt
	I0709 12:25:06.675963   14920 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.key.06e96b97 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.key
	W0709 12:25:06.675963   14920 out.go:239] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0709 12:25:06.675963   14920 certs.go:624] cert expired C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\proxy-client.crt: expiration: 2024-07-09 19:19:34 +0000 UTC, now: 2024-07-09 12:25:06.675963 -0700 PDT m=+295.114459001
	I0709 12:25:06.679657   14920 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\proxy-client.key
	I0709 12:25:06.679657   14920 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\proxy-client.crt with IP's: []
	I0709 12:25:07.227542   14920 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\proxy-client.crt ...
	I0709 12:25:07.227542   14920 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\proxy-client.crt: {Name:mk806c7591cc384503cd1fd368b1098a76a23a10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 12:25:07.227542   14920 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\proxy-client.key ...
	I0709 12:25:07.227542   14920 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\proxy-client.key: {Name:mk365e9d5930e5a36d3aa8d55736679cf6bcb858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 12:25:07.252244   14920 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem (1338 bytes)
	W0709 12:25:07.252717   14920 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032_empty.pem, impossibly tiny 0 bytes
	I0709 12:25:07.252717   14920 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0709 12:25:07.253533   14920 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0709 12:25:07.253780   14920 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0709 12:25:07.254390   14920 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0709 12:25:07.255589   14920 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem (1708 bytes)
	I0709 12:25:07.258281   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0709 12:25:07.371151   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0709 12:25:07.452082   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0709 12:25:07.520518   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0709 12:25:07.624375   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0709 12:25:07.699282   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0709 12:25:07.752915   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0709 12:25:07.803046   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\cert-expiration-206200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0709 12:25:07.878882   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\15032.pem --> /usr/share/ca-certificates/15032.pem (1338 bytes)
	I0709 12:25:07.960635   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\150322.pem --> /usr/share/ca-certificates/150322.pem (1708 bytes)
	I0709 12:25:08.039557   14920 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0709 12:25:08.141343   14920 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0709 12:25:08.217883   14920 ssh_runner.go:195] Run: openssl version
	I0709 12:25:08.243063   14920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0709 12:25:08.294490   14920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0709 12:25:08.307164   14920 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  9 16:40 /usr/share/ca-certificates/minikubeCA.pem
	I0709 12:25:08.318998   14920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0709 12:25:08.353995   14920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0709 12:25:08.398159   14920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15032.pem && ln -fs /usr/share/ca-certificates/15032.pem /etc/ssl/certs/15032.pem"
	I0709 12:25:08.455387   14920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15032.pem
	I0709 12:25:08.472134   14920 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  9 16:56 /usr/share/ca-certificates/15032.pem
	I0709 12:25:08.489345   14920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15032.pem
	I0709 12:25:08.530495   14920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15032.pem /etc/ssl/certs/51391683.0"
	I0709 12:25:08.570922   14920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150322.pem && ln -fs /usr/share/ca-certificates/150322.pem /etc/ssl/certs/150322.pem"
	I0709 12:25:08.613069   14920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150322.pem
	I0709 12:25:08.625195   14920 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  9 16:56 /usr/share/ca-certificates/150322.pem
	I0709 12:25:08.642296   14920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150322.pem
	I0709 12:25:08.682730   14920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150322.pem /etc/ssl/certs/3ec20f2e.0"
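The symlinks created above (`/etc/ssl/certs/b5213941.0`, `51391683.0`, `3ec20f2e.0`) follow OpenSSL's hashed-directory lookup scheme: a CA certificate is found by a symlink named after its subject-name hash with a `.0` suffix. A minimal illustration of the same scheme, using a throwaway self-signed cert in a temporary directory (all paths here are hypothetical, not taken from the test run; assumes `openssl` is installed):

```shell
# Create a demo CA cert, compute its subject hash, and install the
# "<hash>.0" symlink the way the minikube log does for minikubeCA.pem.
mkdir -p /tmp/demo-certs
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-certs/ca.key -out /tmp/demo-certs/ca.pem \
  -days 2 -subj "/CN=demoCA" 2>/dev/null
# -hash prints the subject-name hash OpenSSL uses for directory lookup
hash=$(openssl x509 -hash -noout -in /tmp/demo-certs/ca.pem)
ln -fs /tmp/demo-certs/ca.pem "/tmp/demo-certs/${hash}.0"
ls -l "/tmp/demo-certs/${hash}.0"
```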
	I0709 12:25:08.732655   14920 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0709 12:25:08.763299   14920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0709 12:25:08.796341   14920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0709 12:25:08.837707   14920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0709 12:25:08.864929   14920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0709 12:25:08.893001   14920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0709 12:25:08.919813   14920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
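Each of the `-checkend 86400` runs above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit triggers regeneration. A standalone sketch of the same check against a throwaway cert (paths are hypothetical, not from the test run; assumes `openssl` is installed):

```shell
# Generate a short-lived self-signed cert, then verify it the same way
# minikube does: exit status 0 means "valid for at least 24 more hours".
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 2 -subj "/CN=demo" 2>/dev/null
if openssl x509 -noout -in /tmp/demo.crt -checkend 86400 >/dev/null; then
  echo "cert valid for at least 24h"
else
  echo "cert expires within 24h"
fi
```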
	I0709 12:25:08.938594   14920 kubeadm.go:391] StartCluster: {Name:cert-expiration-206200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:cert-expiration-206200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.195.41 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 12:25:08.948491   14920 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 12:25:09.060224   14920 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0709 12:25:09.090904   14920 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0709 12:25:09.090979   14920 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0709 12:25:09.090979   14920 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0709 12:25:09.103545   14920 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0709 12:25:09.123086   14920 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0709 12:25:09.124635   14920 kubeconfig.go:125] found "cert-expiration-206200" server: "https://172.18.195.41:8443"
	I0709 12:25:09.138133   14920 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0709 12:25:09.162872   14920 kubeadm.go:624] The running cluster does not require reconfiguration: 172.18.195.41
	I0709 12:25:09.163052   14920 kubeadm.go:1154] stopping kube-system containers ...
	I0709 12:25:09.173276   14920 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0709 12:25:09.217838   14920 docker.go:483] Stopping containers: [94691ef82242 75b94093dec7 dfd5ce2e426c ceb175fb84d4 d48a242e5101 4cba679ebcfb 8e22ed783b0f 0eb1a6ddd7cc 621097aedd36 f375e0ca10b9 fb55ca6f5c3d 9aa52432805c 363dfdcabdd2 d72202cafa5f d8e4e2ccc6dc 131a3afc0d25 5ea08a29d388 05ae90c98fcd 616ba6dded5d 84c25de159bd 3d6a02c855de 7553808f2ec3 555835d20c1e c55c2bbf6e1d 58d0c433c59e 3ce46aa1835c 76049bdd9ef4 2d2938b4a3a5 581b7eca3b36 82c16a8e8a16 d336a440e90c]
	I0709 12:25:09.231946   14920 ssh_runner.go:195] Run: docker stop 94691ef82242 75b94093dec7 dfd5ce2e426c ceb175fb84d4 d48a242e5101 4cba679ebcfb 8e22ed783b0f 0eb1a6ddd7cc 621097aedd36 f375e0ca10b9 fb55ca6f5c3d 9aa52432805c 363dfdcabdd2 d72202cafa5f d8e4e2ccc6dc 131a3afc0d25 5ea08a29d388 05ae90c98fcd 616ba6dded5d 84c25de159bd 3d6a02c855de 7553808f2ec3 555835d20c1e c55c2bbf6e1d 58d0c433c59e 3ce46aa1835c 76049bdd9ef4 2d2938b4a3a5 581b7eca3b36 82c16a8e8a16 d336a440e90c
	I0709 12:25:10.772493   14920 ssh_runner.go:235] Completed: docker stop 94691ef82242 75b94093dec7 dfd5ce2e426c ceb175fb84d4 d48a242e5101 4cba679ebcfb 8e22ed783b0f 0eb1a6ddd7cc 621097aedd36 f375e0ca10b9 fb55ca6f5c3d 9aa52432805c 363dfdcabdd2 d72202cafa5f d8e4e2ccc6dc 131a3afc0d25 5ea08a29d388 05ae90c98fcd 616ba6dded5d 84c25de159bd 3d6a02c855de 7553808f2ec3 555835d20c1e c55c2bbf6e1d 58d0c433c59e 3ce46aa1835c 76049bdd9ef4 2d2938b4a3a5 581b7eca3b36 82c16a8e8a16 d336a440e90c: (1.5404796s)
	I0709 12:25:10.786558   14920 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0709 12:25:10.865047   14920 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0709 12:25:10.885911   14920 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Jul  9 19:16 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Jul  9 19:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul  9 19:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Jul  9 19:16 /etc/kubernetes/scheduler.conf
	
	I0709 12:25:10.912562   14920 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0709 12:25:10.955626   14920 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0709 12:25:11.002467   14920 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0709 12:25:11.028813   14920 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0709 12:25:11.055430   14920 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0709 12:25:11.088737   14920 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0709 12:25:11.101507   14920 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0709 12:25:11.125221   14920 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0709 12:25:11.172384   14920 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0709 12:25:11.200897   14920 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0709 12:25:11.342247   14920 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	
	
	==> Docker <==
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="error getting RW layer size for container ID '08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID '08cdb2e045bd8714e10e23c3678d855bb5fe24fde5f9698b6c631cca2fa3aba7'"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="error getting RW layer size for container ID 'af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'af8bbeb8a7c5674d3ce16e290b321999d57f285dd2860dfec7221b605d97328c'"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="error getting RW layer size for container ID 'd750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd750c5dc1d8f41389b90fbf83f859f1b97900893c9675cf1735bec1c62ab04a1'"
	Jul 09 19:26:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="error getting RW layer size for container ID '0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0450e946881cc00d4f2a0091d9e2bd0f1f3c8ef4d01a84864453e6abe7deedb5'"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="error getting RW layer size for container ID '30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID '30039844b3a0fde093e8c1cfd6a41261773efef4419fba3d1c1620bfaa637176'"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="error getting RW layer size for container ID 'fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fac79cbe439702899a3d248d82d0a75c5f4c95a9ed8c23ba1b26e02442afd4a0'"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="error getting RW layer size for container ID '16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:26:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID '16e9716800e3794873e9069d2a706493947c8a93b48d04505e7c1931a9d53541'"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="error getting RW layer size for container ID '85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID '85eae8eff7b5ec67909e8ff50a72b951fd294e998427ab713c6cde905030f9f3'"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="error getting RW layer size for container ID 'a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:26:01 kubernetes-upgrade-715200 cri-dockerd[1436]: time="2024-07-09T19:26:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a6ecff5c5b857e4f11b75f528bc50e0f3f2752e483c96510a9f399e51dbf9003'"
	Jul 09 19:26:01 kubernetes-upgrade-715200 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 09 19:26:01 kubernetes-upgrade-715200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jul 09 19:26:01 kubernetes-upgrade-715200 systemd[1]: Stopped Docker Application Container Engine.
	Jul 09 19:26:01 kubernetes-upgrade-715200 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-09T19:26:03Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +27.980521] systemd-fstab-generator[1101]: Ignoring "noauto" option for root device
	[  +0.109638] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.586845] systemd-fstab-generator[1140]: Ignoring "noauto" option for root device
	[  +0.211393] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +0.225550] systemd-fstab-generator[1166]: Ignoring "noauto" option for root device
	[Jul 9 19:18] systemd-fstab-generator[1389]: Ignoring "noauto" option for root device
	[  +0.213715] systemd-fstab-generator[1401]: Ignoring "noauto" option for root device
	[  +0.236716] systemd-fstab-generator[1413]: Ignoring "noauto" option for root device
	[  +0.294032] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +0.126366] kauditd_printk_skb: 180 callbacks suppressed
	[ +11.737508] systemd-fstab-generator[1632]: Ignoring "noauto" option for root device
	[  +0.109092] kauditd_printk_skb: 12 callbacks suppressed
	[  +4.094979] systemd-fstab-generator[1910]: Ignoring "noauto" option for root device
	[  +4.575544] systemd-fstab-generator[2071]: Ignoring "noauto" option for root device
	[  +0.123415] kauditd_printk_skb: 70 callbacks suppressed
	[  +7.117548] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.748592] systemd-fstab-generator[2884]: Ignoring "noauto" option for root device
	[  +7.005103] hrtimer: interrupt took 1077143 ns
	[  +3.415049] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.286853] kauditd_printk_skb: 38 callbacks suppressed
	[Jul 9 19:23] systemd-fstab-generator[4982]: Ignoring "noauto" option for root device
	[  +0.675350] systemd-fstab-generator[5032]: Ignoring "noauto" option for root device
	[  +0.307721] systemd-fstab-generator[5044]: Ignoring "noauto" option for root device
	[  +0.305593] systemd-fstab-generator[5059]: Ignoring "noauto" option for root device
	[  +5.278789] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 19:27:01 up 10 min,  0 users,  load average: 0.16, 0.57, 0.37
	Linux kubernetes-upgrade-715200 5.10.207 #1 SMP Mon Jul 8 14:53:58 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 09 19:26:55 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:26:55.458088    2078 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-upgrade-715200.17e0a1dd73a78e68\": dial tcp 172.18.204.145:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-kubernetes-upgrade-715200.17e0a1dd73a78e68  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-kubernetes-upgrade-715200,UID:07a946eeeb0949b1243519c0b5d2ce34,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.18.204.145:8443/readyz\": dial tcp 172.18.204.145:8443: connect: connection refused,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-715200,},FirstTimestamp:2024-07-09 19:23:50.155734632 +0000 UTC m=+326.604968923,LastTimestamp:2024-07-09 19:23:51.155306524 +0000 UTC m=+327.604540915,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-715200,}"
	Jul 09 19:26:57 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:26:57.026912    2078 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-715200?timeout=10s\": dial tcp 172.18.204.145:8443: connect: connection refused" interval="7s"
	Jul 09 19:26:57 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:26:57.457726    2078 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m8.309161495s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 09 19:26:59 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:26:59.565137    2078 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-715200\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-715200?resourceVersion=0&timeout=10s\": dial tcp 172.18.204.145:8443: connect: connection refused"
	Jul 09 19:26:59 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:26:59.565927    2078 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-715200\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-715200?timeout=10s\": dial tcp 172.18.204.145:8443: connect: connection refused"
	Jul 09 19:26:59 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:26:59.566999    2078 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-715200\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-715200?timeout=10s\": dial tcp 172.18.204.145:8443: connect: connection refused"
	Jul 09 19:26:59 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:26:59.567884    2078 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-715200\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-715200?timeout=10s\": dial tcp 172.18.204.145:8443: connect: connection refused"
	Jul 09 19:26:59 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:26:59.568901    2078 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-715200\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-715200?timeout=10s\": dial tcp 172.18.204.145:8443: connect: connection refused"
	Jul 09 19:26:59 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:26:59.568990    2078 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.728919    2078 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.730110    2078 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.731020    2078 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.731413    2078 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.731993    2078 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.732110    2078 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: I0709 19:27:01.732246    2078 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.732596    2078 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.733023    2078 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.733118    2078 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.733618    2078 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.734419    2078 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.735160    2078 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.738320    2078 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.738374    2078 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 09 19:27:01 kubernetes-upgrade-715200 kubelet[2078]: E0709 19:27:01.738812    2078 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0709 12:25:14.882824    9284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0709 12:26:01.352409    9284 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0709 12:26:01.389000    9284 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0709 12:26:01.429203    9284 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0709 12:26:01.461941    9284 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0709 12:26:01.498173    9284 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0709 12:26:01.537214    9284 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0709 12:26:01.569551    9284 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0709 12:26:01.604592    9284 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-715200 -n kubernetes-upgrade-715200
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-715200 -n kubernetes-upgrade-715200: exit status 2 (13.3715098s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0709 12:27:02.837549    1116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-715200" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-715200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-715200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-715200: (1m4.0105782s)
--- FAIL: TestKubernetesUpgrade (1430.46s)

TestNoKubernetes/serial/StartWithK8s (299.87s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-492900 --driver=hyperv
E0709 12:05:30.101849   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-492900 --driver=hyperv: exit status 1 (4m59.7023123s)

-- stdout --
	* [NoKubernetes-492900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-492900" primary control-plane node in "NoKubernetes-492900" cluster

-- /stdout --
** stderr ** 
	W0709 12:04:30.097990   15264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-492900 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-492900 -n NoKubernetes-492900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-492900 -n NoKubernetes-492900: exit status 7 (161.55ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	W0709 12:09:29.775003   14492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-492900" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.87s)


Test pass (148/196)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 22.35
4 TestDownloadOnly/v1.20.0/preload-exists 0.01
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.19
9 TestDownloadOnly/v1.20.0/DeleteAll 1.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.1
12 TestDownloadOnly/v1.30.2/json-events 12.41
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.18
18 TestDownloadOnly/v1.30.2/DeleteAll 1.21
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 1.17
21 TestBinaryMirror 6.71
22 TestOffline 527.86
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.2
27 TestAddons/Setup 437.52
30 TestAddons/parallel/Ingress 71.32
31 TestAddons/parallel/InspektorGadget 27.72
32 TestAddons/parallel/MetricsServer 21.23
33 TestAddons/parallel/HelmTiller 30.33
35 TestAddons/parallel/CSI 100.85
36 TestAddons/parallel/Headlamp 37.81
37 TestAddons/parallel/CloudSpanner 19.97
38 TestAddons/parallel/LocalPath 81.23
39 TestAddons/parallel/NvidiaDevicePlugin 21.91
40 TestAddons/parallel/Yakd 5.02
41 TestAddons/parallel/Volcano 51.6
44 TestAddons/serial/GCPAuth/Namespaces 0.32
45 TestAddons/StoppedEnableDisable 52.16
46 TestCertOptions 426.93
47 TestCertExpiration 1008.53
48 TestDockerFlags 527.66
49 TestForceSystemdFlag 244.44
50 TestForceSystemdEnv 488.35
57 TestErrorSpam/start 17.22
58 TestErrorSpam/status 37.03
59 TestErrorSpam/pause 22.79
60 TestErrorSpam/unpause 23.02
61 TestErrorSpam/stop 56.54
64 TestFunctional/serial/CopySyncFile 0.03
65 TestFunctional/serial/StartWithProxy 230.38
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 126.8
68 TestFunctional/serial/KubeContext 0.13
69 TestFunctional/serial/KubectlGetPods 0.21
72 TestFunctional/serial/CacheCmd/cache/add_remote 25.38
73 TestFunctional/serial/CacheCmd/cache/add_local 10.92
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.18
75 TestFunctional/serial/CacheCmd/cache/list 0.18
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 35.04
78 TestFunctional/serial/CacheCmd/cache/delete 0.35
79 TestFunctional/serial/MinikubeKubectlCmd 0.42
81 TestFunctional/serial/ExtraConfig 129.81
82 TestFunctional/serial/ComponentHealth 0.18
83 TestFunctional/serial/LogsCmd 8.51
84 TestFunctional/serial/LogsFileCmd 10.78
85 TestFunctional/serial/InvalidService 21.77
91 TestFunctional/parallel/StatusCmd 42.28
95 TestFunctional/parallel/ServiceCmdConnect 28.24
96 TestFunctional/parallel/AddonsCmd 0.67
97 TestFunctional/parallel/PersistentVolumeClaim 40.93
99 TestFunctional/parallel/SSHCmd 20.98
100 TestFunctional/parallel/CpCmd 59.84
101 TestFunctional/parallel/MySQL 67.79
102 TestFunctional/parallel/FileSync 11.26
103 TestFunctional/parallel/CertSync 64.14
107 TestFunctional/parallel/NodeLabels 0.24
109 TestFunctional/parallel/NonActiveRuntimeDisabled 12.02
111 TestFunctional/parallel/License 3.43
112 TestFunctional/parallel/DockerEnv/powershell 47.69
113 TestFunctional/parallel/UpdateContextCmd/no_changes 2.56
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.67
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 3.04
116 TestFunctional/parallel/ServiceCmd/DeployApp 36.65
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.91
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.58
122 TestFunctional/parallel/ServiceCmd/List 13.88
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
129 TestFunctional/parallel/ServiceCmd/JSONOutput 13.85
131 TestFunctional/parallel/ProfileCmd/profile_not_create 11.92
133 TestFunctional/parallel/ProfileCmd/profile_list 13
134 TestFunctional/parallel/Version/short 0.32
135 TestFunctional/parallel/Version/components 8.62
136 TestFunctional/parallel/ImageCommands/ImageListShort 7.84
137 TestFunctional/parallel/ImageCommands/ImageListTable 7.44
138 TestFunctional/parallel/ImageCommands/ImageListJson 7.82
139 TestFunctional/parallel/ImageCommands/ImageListYaml 7.92
140 TestFunctional/parallel/ImageCommands/ImageBuild 25.89
141 TestFunctional/parallel/ImageCommands/Setup 4.63
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 25.17
144 TestFunctional/parallel/ProfileCmd/profile_json_output 10.88
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 19.48
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 24.4
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.38
148 TestFunctional/parallel/ImageCommands/ImageRemove 13.96
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 16.06
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 8.85
151 TestFunctional/delete_addon-resizer_images 0.48
152 TestFunctional/delete_my-image_image 0.18
153 TestFunctional/delete_minikube_cached_images 0.19
157 TestMultiControlPlane/serial/StartCluster 716
158 TestMultiControlPlane/serial/DeployApp 11.38
160 TestMultiControlPlane/serial/AddWorkerNode 261.26
161 TestMultiControlPlane/serial/NodeLabels 0.19
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 29.45
163 TestMultiControlPlane/serial/CopyFile 645.73
167 TestImageBuild/serial/Setup 197.27
168 TestImageBuild/serial/NormalBuild 9.75
169 TestImageBuild/serial/BuildWithBuildArg 9
170 TestImageBuild/serial/BuildWithDockerIgnore 7.76
171 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.67
175 TestJSONOutput/start/Command 211.79
176 TestJSONOutput/start/Audit 0
178 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Command 7.88
182 TestJSONOutput/pause/Audit 0
184 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Command 7.67
188 TestJSONOutput/unpause/Audit 0
190 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/stop/Command 34.13
194 TestJSONOutput/stop/Audit 0
196 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
198 TestErrorJSONOutput 1.31
203 TestMainNoArgs 0.17
204 TestMinikubeProfile 529.43
207 TestMountStart/serial/StartWithMountFirst 156.31
208 TestMountStart/serial/VerifyMountFirst 9.49
209 TestMountStart/serial/StartWithMountSecond 158.45
210 TestMountStart/serial/VerifyMountSecond 9.61
211 TestMountStart/serial/DeleteFirst 31.37
212 TestMountStart/serial/VerifyMountPostDelete 9.49
213 TestMountStart/serial/Stop 26.5
214 TestMountStart/serial/RestartStopped 118.95
215 TestMountStart/serial/VerifyMountPostStop 9.42
222 TestMultiNode/serial/MultiNodeLabels 0.18
223 TestMultiNode/serial/ProfileList 9.64
230 TestPreload 517.56
231 TestScheduledStopWindows 325.04
236 TestRunningBinaryUpgrade 851.56
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.34
TestDownloadOnly/v1.20.0/json-events (22.35s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-955600 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-955600 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (22.3536806s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.35s)

TestDownloadOnly/v1.20.0/preload-exists (0.01s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.01s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-955600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-955600: exit status 85 (190.1938ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-955600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:37 PDT |          |
	|         | -p download-only-955600        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 09:37:23
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 09:37:23.385343    5452 out.go:291] Setting OutFile to fd 612 ...
	I0709 09:37:23.386499    5452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 09:37:23.386499    5452 out.go:304] Setting ErrFile to fd 616...
	I0709 09:37:23.386499    5452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0709 09:37:23.403277    5452 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0709 09:37:23.411999    5452 out.go:298] Setting JSON to true
	I0709 09:37:23.415075    5452 start.go:129] hostinfo: {"hostname":"minikube1","uptime":1312,"bootTime":1720541731,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 09:37:23.415075    5452 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 09:37:23.425346    5452 out.go:97] [download-only-955600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 09:37:23.425858    5452 notify.go:220] Checking for updates...
	W0709 09:37:23.425858    5452 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0709 09:37:23.435788    5452 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 09:37:23.438851    5452 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 09:37:23.441587    5452 out.go:169] MINIKUBE_LOCATION=19199
	I0709 09:37:23.445473    5452 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0709 09:37:23.450863    5452 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0709 09:37:23.451487    5452 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 09:37:28.547746    5452 out.go:97] Using the hyperv driver based on user configuration
	I0709 09:37:28.548028    5452 start.go:297] selected driver: hyperv
	I0709 09:37:28.548028    5452 start.go:901] validating driver "hyperv" against <nil>
	I0709 09:37:28.548345    5452 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 09:37:28.598867    5452 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0709 09:37:28.598867    5452 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0709 09:37:28.600590    5452 cni.go:84] Creating CNI manager for ""
	I0709 09:37:28.600724    5452 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0709 09:37:28.600869    5452 start.go:340] cluster config:
	{Name:download-only-955600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-955600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 09:37:28.601870    5452 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 09:37:28.605095    5452 out.go:97] Downloading VM boot image ...
	I0709 09:37:28.605095    5452 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19199/minikube-v1.33.1-1720433170-19199-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1720433170-19199-amd64.iso
	I0709 09:37:33.923284    5452 out.go:97] Starting "download-only-955600" primary control-plane node in "download-only-955600" cluster
	I0709 09:37:33.923284    5452 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0709 09:37:33.980613    5452 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0709 09:37:33.981862    5452 cache.go:56] Caching tarball of preloaded images
	I0709 09:37:33.982314    5452 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0709 09:37:33.984810    5452 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0709 09:37:33.984810    5452 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0709 09:37:34.056681    5452 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0709 09:37:38.540435    5452 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0709 09:37:38.671421    5452 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0709 09:37:39.636065    5452 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0709 09:37:39.638922    5452 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-955600\config.json ...
	I0709 09:37:39.639564    5452 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-955600\config.json: {Name:mke3556a8c52613e992297ba0cd6d670ad7f4f74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0709 09:37:39.639879    5452 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0709 09:37:39.641565    5452 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-955600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-955600"

-- /stdout --
** stderr ** 
	W0709 09:37:45.728657    5276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.19s)

TestDownloadOnly/v1.20.0/DeleteAll (1.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2217947s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-955600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-955600: (1.0929181s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.10s)

TestDownloadOnly/v1.30.2/json-events (12.41s)

=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-530500 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-530500 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=hyperv: (12.4024057s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (12.41s)

TestDownloadOnly/v1.30.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnly/v1.30.2/LogsDuration (0.18s)

=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-530500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-530500: exit status 85 (175.0969ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-955600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:37 PDT |                     |
	|         | -p download-only-955600        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:37 PDT | 09 Jul 24 09:37 PDT |
	| delete  | -p download-only-955600        | download-only-955600 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:37 PDT | 09 Jul 24 09:37 PDT |
	| start   | -o=json --download-only        | download-only-530500 | minikube1\jenkins | v1.33.1 | 09 Jul 24 09:37 PDT |                     |
	|         | -p download-only-530500        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/09 09:37:48
	Running on machine: minikube1
	Binary: Built with gc go1.22.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0709 09:37:48.252983   14164 out.go:291] Setting OutFile to fd 572 ...
	I0709 09:37:48.263357   14164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 09:37:48.263357   14164 out.go:304] Setting ErrFile to fd 424...
	I0709 09:37:48.263357   14164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 09:37:48.286217   14164 out.go:298] Setting JSON to true
	I0709 09:37:48.289267   14164 start.go:129] hostinfo: {"hostname":"minikube1","uptime":1336,"bootTime":1720541731,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 09:37:48.289267   14164 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 09:37:48.295293   14164 out.go:97] [download-only-530500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 09:37:48.295521   14164 notify.go:220] Checking for updates...
	I0709 09:37:48.298352   14164 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 09:37:48.300462   14164 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 09:37:48.302585   14164 out.go:169] MINIKUBE_LOCATION=19199
	I0709 09:37:48.305685   14164 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0709 09:37:48.311352   14164 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0709 09:37:48.313017   14164 driver.go:392] Setting default libvirt URI to qemu:///system
	I0709 09:37:53.584334   14164 out.go:97] Using the hyperv driver based on user configuration
	I0709 09:37:53.584334   14164 start.go:297] selected driver: hyperv
	I0709 09:37:53.584334   14164 start.go:901] validating driver "hyperv" against <nil>
	I0709 09:37:53.584734   14164 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0709 09:37:53.632617   14164 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0709 09:37:53.634033   14164 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0709 09:37:53.634033   14164 cni.go:84] Creating CNI manager for ""
	I0709 09:37:53.634033   14164 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0709 09:37:53.634033   14164 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0709 09:37:53.634033   14164 start.go:340] cluster config:
	{Name:download-only-530500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1720534588-19199@sha256:b4b7a193d4d5ddc3a5becbbd3489eb6d587f98b5654dfee6a583e3346dfa913d Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-530500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0709 09:37:53.634756   14164 iso.go:125] acquiring lock: {Name:mk051d2b5603f0ea5a967b26e384549341cb1aa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0709 09:37:53.637858   14164 out.go:97] Starting "download-only-530500" primary control-plane node in "download-only-530500" cluster
	I0709 09:37:53.637858   14164 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 09:37:53.680318   14164 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0709 09:37:53.680318   14164 cache.go:56] Caching tarball of preloaded images
	I0709 09:37:53.680318   14164 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0709 09:37:53.683665   14164 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0709 09:37:53.683759   14164 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	I0709 09:37:53.747099   14164 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4?checksum=md5:f94875995e68df9a8856f3277eea0126 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-530500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-530500"

-- /stdout --
** stderr ** 
	W0709 09:38:00.655920    5608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.18s)

TestDownloadOnly/v1.30.2/DeleteAll (1.21s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2047561s)
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (1.21s)

TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (1.17s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-530500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-530500: (1.1676614s)
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (1.17s)

TestBinaryMirror (6.71s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-991700 --alsologtostderr --binary-mirror http://127.0.0.1:51648 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-991700 --alsologtostderr --binary-mirror http://127.0.0.1:51648 --driver=hyperv: (5.9421187s)
helpers_test.go:175: Cleaning up "binary-mirror-991700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-991700
--- PASS: TestBinaryMirror (6.71s)

TestOffline (527.86s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-426900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-426900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (8m6.4830095s)
helpers_test.go:175: Cleaning up "offline-docker-426900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-426900
E0709 12:13:04.141762   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-426900: (41.3727085s)
--- PASS: TestOffline (527.86s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-291800
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-291800: exit status 85 (200.5427ms)

-- stdout --
	* Profile "addons-291800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-291800"

-- /stdout --
** stderr ** 
	W0709 09:38:12.451687    1796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-291800
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-291800: exit status 85 (201.2543ms)

-- stdout --
	* Profile "addons-291800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-291800"

-- /stdout --
** stderr ** 
	W0709 09:38:12.463366    3376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.20s)

TestAddons/Setup (437.52s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-291800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-291800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m17.5105152s)
--- PASS: TestAddons/Setup (437.52s)

TestAddons/parallel/Ingress (71.32s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-291800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-291800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-291800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:247: (dbg) Done: kubectl --context addons-291800 replace --force -f testdata\nginx-pod-svc.yaml: (2.1346009s)
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [872b0ae4-e8df-45c3-8ca7-6b53db78661c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [872b0ae4-e8df-45c3-8ca7-6b53db78661c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.027302s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.6212455s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-291800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0709 09:46:13.732804   14864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-291800 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:288: (dbg) Done: kubectl --context addons-291800 replace --force -f testdata\ingress-dns-example-v1.yaml: (1.7798161s)
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 ip: (2.5468459s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.18.206.170
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 addons disable ingress-dns --alsologtostderr -v=1: (16.5952891s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 addons disable ingress --alsologtostderr -v=1: (22.7173992s)
--- PASS: TestAddons/parallel/Ingress (71.32s)

TestAddons/parallel/InspektorGadget (27.72s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-t9w6w" [8e616dbb-0da7-4289-b55b-e804fbc7a803] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0276056s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-291800
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-291800: (21.6801315s)
--- PASS: TestAddons/parallel/InspektorGadget (27.72s)

TestAddons/parallel/MetricsServer (21.23s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.9907ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-8fmmc" [44a7eb09-bce1-4739-8c73-33b4ef01cfac] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0154813s
addons_test.go:417: (dbg) Run:  kubectl --context addons-291800 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 addons disable metrics-server --alsologtostderr -v=1: (15.9946251s)
--- PASS: TestAddons/parallel/MetricsServer (21.23s)

TestAddons/parallel/HelmTiller (30.33s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 17.9515ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-jfgxm" [7cc47a1f-99e9-4e27-98d3-227f52119db5] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0137655s
addons_test.go:475: (dbg) Run:  kubectl --context addons-291800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-291800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.2244268s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 addons disable helm-tiller --alsologtostderr -v=1: (18.0430641s)
--- PASS: TestAddons/parallel/HelmTiller (30.33s)

TestAddons/parallel/CSI (100.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 91.0293ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-291800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-291800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7f97b295-a1aa-4777-8329-5a08d00c2945] Pending
helpers_test.go:344: "task-pv-pod" [7f97b295-a1aa-4777-8329-5a08d00c2945] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7f97b295-a1aa-4777-8329-5a08d00c2945] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.016649s
addons_test.go:586: (dbg) Run:  kubectl --context addons-291800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-291800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-291800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-291800 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-291800 delete pod task-pv-pod: (1.4531363s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-291800 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-291800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-291800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [264d8ded-5f24-44ea-96b5-3e06633c2c2b] Pending
helpers_test.go:344: "task-pv-pod-restore" [264d8ded-5f24-44ea-96b5-3e06633c2c2b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [264d8ded-5f24-44ea-96b5-3e06633c2c2b] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.015424s
addons_test.go:628: (dbg) Run:  kubectl --context addons-291800 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-291800 delete pod task-pv-pod-restore: (1.0216763s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-291800 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-291800 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (22.0249722s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 addons disable volumesnapshots --alsologtostderr -v=1: (15.1819838s)
--- PASS: TestAddons/parallel/CSI (100.85s)

TestAddons/parallel/Headlamp (37.81s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-291800 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-291800 --alsologtostderr -v=1: (16.6036368s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-ggw75" [cf49e9a5-c435-4f05-9019-16e1d249f046] Pending
helpers_test.go:344: "headlamp-7867546754-ggw75" [cf49e9a5-c435-4f05-9019-16e1d249f046] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-ggw75" [cf49e9a5-c435-4f05-9019-16e1d249f046] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.2043615s
--- PASS: TestAddons/parallel/Headlamp (37.81s)

TestAddons/parallel/CloudSpanner (19.97s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-492qg" [fb959bd2-77c5-495f-bdc4-72e373428957] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0248504s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-291800
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-291800: (14.926394s)
--- PASS: TestAddons/parallel/CloudSpanner (19.97s)

TestAddons/parallel/LocalPath (81.23s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-291800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-291800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ec57eae4-97e2-4878-9c52-7b5288f357c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ec57eae4-97e2-4878-9c52-7b5288f357c9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ec57eae4-97e2-4878-9c52-7b5288f357c9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0151035s
addons_test.go:992: (dbg) Run:  kubectl --context addons-291800 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 ssh "cat /opt/local-path-provisioner/pvc-8856aa91-f6ff-40b2-9218-4d1f014373ed_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 ssh "cat /opt/local-path-provisioner/pvc-8856aa91-f6ff-40b2-9218-4d1f014373ed_default_test-pvc/file1": (9.5346539s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-291800 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-291800 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (59.1812597s)
--- PASS: TestAddons/parallel/LocalPath (81.23s)

TestAddons/parallel/NvidiaDevicePlugin (21.91s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9c8jf" [5ff05cff-df2c-4107-a2ae-e750af8642c2] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0253694s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-291800
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-291800: (16.3053616s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.91s)

TestAddons/parallel/Yakd (5.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-kjjgv" [5f167e89-9273-4727-ae9a-98132add09d3] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0132193s
--- PASS: TestAddons/parallel/Yakd (5.02s)

TestAddons/parallel/Volcano (51.6s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:889: volcano-scheduler stabilized in 57.1232ms
addons_test.go:897: volcano-admission stabilized in 57.9418ms
addons_test.go:905: volcano-controller stabilized in 67.8633ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-qsj6r" [cb15d746-822a-4c41-be0b-3fb9c556253b] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.0241356s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-w4n74" [58814ec7-0235-4f51-9c6a-295f192473fd] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.0237977s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-psj8r" [2f5365e8-aa0b-475a-8507-8cf7ff1990fb] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0168273s
addons_test.go:924: (dbg) Run:  kubectl --context addons-291800 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-291800 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-291800 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d7176ba0-fa87-41a6-a2b5-9ca3d4f3c2b6] Pending
helpers_test.go:344: "test-job-nginx-0" [d7176ba0-fa87-41a6-a2b5-9ca3d4f3c2b6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d7176ba0-fa87-41a6-a2b5-9ca3d4f3c2b6] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 10.0127093s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291800 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291800 addons disable volcano --alsologtostderr -v=1: (25.6554158s)
--- PASS: TestAddons/parallel/Volcano (51.60s)

TestAddons/serial/GCPAuth/Namespaces (0.32s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-291800 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-291800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.32s)

TestAddons/StoppedEnableDisable (52.16s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-291800
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-291800: (39.991266s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-291800
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-291800: (4.9692925s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-291800
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-291800: (4.6017442s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-291800
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-291800: (2.5928323s)
--- PASS: TestAddons/StoppedEnableDisable (52.16s)

TestCertOptions (426.93s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-402400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E0709 12:23:04.156251   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-402400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (5m53.0586181s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-402400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
E0709 12:28:04.155143   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-402400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.8664913s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-402400 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-402400 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-402400 -- "sudo cat /etc/kubernetes/admin.conf": (11.9132431s)
helpers_test.go:175: Cleaning up "cert-options-402400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-402400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-402400: (50.8755748s)
--- PASS: TestCertOptions (426.93s)

TestCertExpiration (1008.53s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-206200 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-206200 --memory=2048 --cert-expiration=3m --driver=hyperv: (7m40.7640565s)
E0709 12:18:04.144963   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-206200 --memory=2048 --cert-expiration=8760h --driver=hyperv
E0709 12:20:30.111562   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-206200 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m20.3230516s)
helpers_test.go:175: Cleaning up "cert-expiration-206200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-206200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-206200: (47.4212225s)
--- PASS: TestCertExpiration (1008.53s)

TestDockerFlags (527.66s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-247100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
E0709 12:14:27.406130   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-247100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (7m45.5283208s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-247100 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-247100 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.5406077s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-247100 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-247100 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.5591747s)
helpers_test.go:175: Cleaning up "docker-flags-247100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-247100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-247100: (41.0255095s)
--- PASS: TestDockerFlags (527.66s)

TestForceSystemdFlag (244.44s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-715200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-715200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m15.4597818s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-715200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-715200 ssh "docker info --format {{.CgroupDriver}}": (9.9943536s)
helpers_test.go:175: Cleaning up "force-systemd-flag-715200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-715200
E0709 12:08:04.138882   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 12:08:33.350810   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-715200: (38.9697922s)
--- PASS: TestForceSystemdFlag (244.44s)

TestForceSystemdEnv (488.35s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-881500 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-881500 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (5m58.2852628s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-881500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-881500 ssh "docker info --format {{.CgroupDriver}}": (10.2950872s)
helpers_test.go:175: Cleaning up "force-systemd-env-881500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-881500
E0709 12:15:30.113454   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-881500: (1m59.7709467s)
--- PASS: TestForceSystemdEnv (488.35s)

TestErrorSpam/start (17.22s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 start --dry-run: (5.7204227s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 start --dry-run: (5.7441066s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 start --dry-run: (5.7421529s)
--- PASS: TestErrorSpam/start (17.22s)

TestErrorSpam/status (37.03s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 status: (12.769287s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 status: (12.1127675s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 status: (12.1432753s)
--- PASS: TestErrorSpam/status (37.03s)

TestErrorSpam/pause (22.79s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 pause: (7.827941s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 pause: (7.4326447s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 pause: (7.5233709s)
--- PASS: TestErrorSpam/pause (22.79s)

TestErrorSpam/unpause (23.02s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 unpause: (7.6982518s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 unpause
E0709 09:55:30.090405   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 09:55:30.105420   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 09:55:30.121573   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 09:55:30.153464   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 09:55:30.199830   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 09:55:30.293806   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 09:55:30.465888   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 09:55:30.796819   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 09:55:31.444850   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 09:55:32.729184   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 09:55:35.300293   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 unpause: (7.6098284s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 unpause
E0709 09:55:40.424997   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 unpause: (7.7067343s)
--- PASS: TestErrorSpam/unpause (23.02s)

TestErrorSpam/stop (56.54s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 stop
E0709 09:55:50.668822   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 09:56:11.156180   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 stop: (34.2183737s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 stop: (11.3012513s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-783300 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-783300 stop: (11.0174818s)
--- PASS: TestErrorSpam/stop (56.54s)

TestFunctional/serial/CopySyncFile (0.03s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\15032\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (230.38s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-779900 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0709 09:58:14.041598   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 10:00:30.090810   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-779900 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m50.3696077s)
--- PASS: TestFunctional/serial/StartWithProxy (230.38s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (126.8s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-779900 --alsologtostderr -v=8
E0709 10:00:57.904504   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-779900 --alsologtostderr -v=8: (2m6.7951505s)
functional_test.go:659: soft start took 2m6.7965993s for "functional-779900" cluster.
--- PASS: TestFunctional/serial/SoftStart (126.80s)

TestFunctional/serial/KubeContext (0.13s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/KubectlGetPods (0.21s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-779900 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.21s)

TestFunctional/serial/CacheCmd/cache/add_remote (25.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 cache add registry.k8s.io/pause:3.1: (8.6983804s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 cache add registry.k8s.io/pause:3.3: (8.3838806s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 cache add registry.k8s.io/pause:latest: (8.2901737s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (25.38s)

TestFunctional/serial/CacheCmd/cache/add_local (10.92s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-779900 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2357724781\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-779900 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2357724781\001: (2.3704496s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 cache add minikube-local-cache-test:functional-779900
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 cache add minikube-local-cache-test:functional-779900: (8.1617311s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 cache delete minikube-local-cache-test:functional-779900
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-779900
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.92s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.18s)

TestFunctional/serial/CacheCmd/cache/list (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.18s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh sudo crictl images: (9.0681847s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (35.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.0399647s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-779900 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.0510645s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0709 10:03:49.524530   13748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 cache reload: (7.8831283s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.0557278s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (35.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.35s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.35s)

TestFunctional/serial/MinikubeKubectlCmd (0.42s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 kubectl -- --context functional-779900 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.42s)

TestFunctional/serial/ExtraConfig (129.81s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-779900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0709 10:05:30.086354   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-779900 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m9.8128917s)
functional_test.go:757: restart took 2m9.8133036s for "functional-779900" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (129.81s)

TestFunctional/serial/ComponentHealth (0.18s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-779900 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

TestFunctional/serial/LogsCmd (8.51s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 logs: (8.5059424s)
--- PASS: TestFunctional/serial/LogsCmd (8.51s)

TestFunctional/serial/LogsFileCmd (10.78s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd712705145\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd712705145\001\logs.txt: (10.7759559s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.78s)

TestFunctional/serial/InvalidService (21.77s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-779900 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-779900
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-779900: exit status 115 (16.9043919s)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.18.200.147:32139 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	W0709 10:07:22.137965    9076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_service_f513297bf07cd3fd942cead3a34f1b094af52476_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-779900 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-779900 delete -f testdata\invalidsvc.yaml: (1.4372562s)
--- PASS: TestFunctional/serial/InvalidService (21.77s)

TestFunctional/parallel/StatusCmd (42.28s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 status: (13.113643s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (13.1797549s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 status -o json: (15.975715s)
--- PASS: TestFunctional/parallel/StatusCmd (42.28s)

TestFunctional/parallel/ServiceCmdConnect (28.24s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-779900 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-779900 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-2cgxs" [f33835f8-9e3e-4001-bd5b-288a2f3446b9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-2cgxs" [f33835f8-9e3e-4001-bd5b-288a2f3446b9] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.0159845s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 service hello-node-connect --url: (18.7050929s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.18.200.147:32081
functional_test.go:1671: http://172.18.200.147:32081: success! body:

Hostname: hello-node-connect-57b4589c47-2cgxs

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.18.200.147:8080/

Request Headers:
	accept-encoding=gzip
	host=172.18.200.147:32081
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (28.24s)

TestFunctional/parallel/AddonsCmd (0.67s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.67s)

TestFunctional/parallel/PersistentVolumeClaim (40.93s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [551b9b07-edb0-4719-a113-2852a2d661b6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0116105s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-779900 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-779900 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-779900 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-779900 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d80a911b-3804-4260-a23d-fd42ad10476b] Pending
helpers_test.go:344: "sp-pod" [d80a911b-3804-4260-a23d-fd42ad10476b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d80a911b-3804-4260-a23d-fd42ad10476b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.0223417s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-779900 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-779900 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-779900 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [23728cff-e0ff-4da8-8b08-36cda5da5cf1] Pending
helpers_test.go:344: "sp-pod" [23728cff-e0ff-4da8-8b08-36cda5da5cf1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [23728cff-e0ff-4da8-8b08-36cda5da5cf1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0187953s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-779900 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.93s)

TestFunctional/parallel/SSHCmd (20.98s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh "echo hello": (10.3303806s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh "cat /etc/hostname": (10.6488146s)
--- PASS: TestFunctional/parallel/SSHCmd (20.98s)

TestFunctional/parallel/CpCmd (59.84s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.2306223s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh -n functional-779900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh -n functional-779900 "sudo cat /home/docker/cp-test.txt": (11.5285252s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 cp functional-779900:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd3234270687\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 cp functional-779900:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd3234270687\001\cp-test.txt: (10.6463297s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh -n functional-779900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh -n functional-779900 "sudo cat /home/docker/cp-test.txt": (10.7825613s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.0049112s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh -n functional-779900 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh -n functional-779900 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.6433181s)
--- PASS: TestFunctional/parallel/CpCmd (59.84s)

TestFunctional/parallel/MySQL (67.79s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-779900 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-jkkq8" [ebe8dd88-94f1-45d3-9162-a5445f7847c0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-jkkq8" [ebe8dd88-94f1-45d3-9162-a5445f7847c0] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 51.0165255s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-779900 exec mysql-64454c8b5c-jkkq8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-779900 exec mysql-64454c8b5c-jkkq8 -- mysql -ppassword -e "show databases;": exit status 1 (366.0759ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-779900 exec mysql-64454c8b5c-jkkq8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-779900 exec mysql-64454c8b5c-jkkq8 -- mysql -ppassword -e "show databases;": exit status 1 (360.4143ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-779900 exec mysql-64454c8b5c-jkkq8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-779900 exec mysql-64454c8b5c-jkkq8 -- mysql -ppassword -e "show databases;": exit status 1 (378.2355ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-779900 exec mysql-64454c8b5c-jkkq8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-779900 exec mysql-64454c8b5c-jkkq8 -- mysql -ppassword -e "show databases;": exit status 1 (356.2921ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-779900 exec mysql-64454c8b5c-jkkq8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-779900 exec mysql-64454c8b5c-jkkq8 -- mysql -ppassword -e "show databases;": exit status 1 (340.0659ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-779900 exec mysql-64454c8b5c-jkkq8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (67.79s)

TestFunctional/parallel/FileSync (11.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15032/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /etc/test/nested/copy/15032/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /etc/test/nested/copy/15032/hosts": (11.2546075s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.26s)

TestFunctional/parallel/CertSync (64.14s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15032.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /etc/ssl/certs/15032.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /etc/ssl/certs/15032.pem": (11.8827791s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15032.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /usr/share/ca-certificates/15032.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /usr/share/ca-certificates/15032.pem": (11.2381785s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.7702293s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/150322.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /etc/ssl/certs/150322.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /etc/ssl/certs/150322.pem": (10.381401s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/150322.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /usr/share/ca-certificates/150322.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /usr/share/ca-certificates/150322.pem": (10.0440567s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.8219118s)
--- PASS: TestFunctional/parallel/CertSync (64.14s)

TestFunctional/parallel/NodeLabels (0.24s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-779900 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.24s)

TestFunctional/parallel/NonActiveRuntimeDisabled (12.02s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-779900 ssh "sudo systemctl is-active crio": exit status 1 (12.0142578s)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	W0709 10:07:40.507874    4400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (12.02s)

TestFunctional/parallel/License (3.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.4155436s)
--- PASS: TestFunctional/parallel/License (3.43s)

TestFunctional/parallel/DockerEnv/powershell (47.69s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-779900 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-779900"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-779900 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-779900": (31.8072105s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-779900 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-779900 docker-env | Invoke-Expression ; docker images": (15.8636408s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (47.69s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.56s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 update-context --alsologtostderr -v=2: (2.5556352s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.56s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.67s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 update-context --alsologtostderr -v=2: (2.6673892s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.67s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (3.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 update-context --alsologtostderr -v=2: (3.0353576s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (3.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (36.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-779900 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-779900 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-tbrf6" [e0e31ace-8418-4f9c-92b1-fb3ac1494e51] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-tbrf6" [e0e31ace-8418-4f9c-92b1-fb3ac1494e51] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 36.0146205s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (36.65s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.91s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-779900 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-779900 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-779900 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-779900 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11556: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 14364: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.91s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-779900 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-779900 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5df3e9f2-4c46-4fd3-b705-4fc3e3a83775] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5df3e9f2-4c46-4fd3-b705-4fc3e3a83775] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.022322s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.58s)

TestFunctional/parallel/ServiceCmd/List (13.88s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 service list: (13.8849889s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (13.88s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-779900 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 10260: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/JSONOutput (13.85s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 service list -o json: (13.8456469s)
functional_test.go:1490: Took "13.8459628s" to run "out/minikube-windows-amd64.exe -p functional-779900 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (13.85s)

TestFunctional/parallel/ProfileCmd/profile_not_create (11.92s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.5516012s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.92s)

TestFunctional/parallel/ProfileCmd/profile_list (13.00s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (12.7989575s)
functional_test.go:1311: Took "12.8020699s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "193.3825ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (13.00s)

TestFunctional/parallel/Version/short (0.32s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 version --short
--- PASS: TestFunctional/parallel/Version/short (0.32s)

TestFunctional/parallel/Version/components (8.62s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 version -o=json --components
E0709 10:10:30.102279   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 version -o=json --components: (8.6147372s)
--- PASS: TestFunctional/parallel/Version/components (8.62s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image ls --format short --alsologtostderr: (7.8341203s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-779900 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-779900
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-779900
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-779900 image ls --format short --alsologtostderr:
W0709 10:11:53.499925    7632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0709 10:11:53.511506    7632 out.go:291] Setting OutFile to fd 872 ...
I0709 10:11:53.528988    7632 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 10:11:53.529049    7632 out.go:304] Setting ErrFile to fd 572...
I0709 10:11:53.529049    7632 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 10:11:53.560647    7632 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 10:11:53.561184    7632 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 10:11:53.562370    7632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
I0709 10:11:56.010644    7632 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0709 10:11:56.010644    7632 main.go:141] libmachine: [stderr =====>] : 
I0709 10:11:56.029216    7632 ssh_runner.go:195] Run: systemctl --version
I0709 10:11:56.029424    7632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
I0709 10:11:58.309514    7632 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0709 10:11:58.309514    7632 main.go:141] libmachine: [stderr =====>] : 
I0709 10:11:58.323229    7632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
I0709 10:12:01.023058    7632 main.go:141] libmachine: [stdout =====>] : 172.18.200.147

                                                
                                                
I0709 10:12:01.023058    7632 main.go:141] libmachine: [stderr =====>] : 
I0709 10:12:01.027731    7632 sshutil.go:53] new ssh client: &{IP:172.18.200.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-779900\id_rsa Username:docker}
I0709 10:12:01.126480    7632 ssh_runner.go:235] Completed: systemctl --version: (5.0971413s)
I0709 10:12:01.138870    7632 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (7.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image ls --format table --alsologtostderr: (7.4329496s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-779900 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.30.2           | 56ce0fd9fb532 | 117MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| registry.k8s.io/kube-scheduler              | v1.30.2           | 7820c83aa1394 | 62MB   |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/library/minikube-local-cache-test | functional-779900 | ffb617c1c1dad | 30B    |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.2           | e874818b3caac | 111MB  |
| registry.k8s.io/kube-proxy                  | v1.30.2           | 53c535741fb44 | 84.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-779900 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-779900 image ls --format table --alsologtostderr:
W0709 10:12:01.333187    7904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0709 10:12:01.343649    7904 out.go:291] Setting OutFile to fd 1372 ...
I0709 10:12:01.344281    7904 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 10:12:01.344281    7904 out.go:304] Setting ErrFile to fd 1412...
I0709 10:12:01.344281    7904 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 10:12:01.362269    7904 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 10:12:01.362799    7904 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 10:12:01.363481    7904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
I0709 10:12:03.578496    7904 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0709 10:12:03.578496    7904 main.go:141] libmachine: [stderr =====>] : 
I0709 10:12:03.593624    7904 ssh_runner.go:195] Run: systemctl --version
I0709 10:12:03.593624    7904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
I0709 10:12:05.806834    7904 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0709 10:12:05.806834    7904 main.go:141] libmachine: [stderr =====>] : 
I0709 10:12:05.818927    7904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
I0709 10:12:08.477811    7904 main.go:141] libmachine: [stdout =====>] : 172.18.200.147

                                                
                                                
I0709 10:12:08.477811    7904 main.go:141] libmachine: [stderr =====>] : 
I0709 10:12:08.477811    7904 sshutil.go:53] new ssh client: &{IP:172.18.200.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-779900\id_rsa Username:docker}
I0709 10:12:08.582193    7904 ssh_runner.go:235] Completed: systemctl --version: (4.988558s)
I0709 10:12:08.594673    7904 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (7.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image ls --format json --alsologtostderr: (7.8187622s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-779900 image ls --format json --alsologtostderr:
[{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"62000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78
f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ffb617c1c1dade496ad564012558379868adf1cc0a921fb1d5c886262fa3433c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-779900"],"size":"30"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-779900"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":[],"repoTags":["registry.k8s.io
/kube-apiserver:v1.30.2"],"size":"117000000"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"111000000"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"84700000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-779900 image ls --format json --alsologtostderr:
W0709 10:11:53.503383   11120 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0709 10:11:53.511573   11120 out.go:291] Setting OutFile to fd 1168 ...
I0709 10:11:53.528708   11120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 10:11:53.528708   11120 out.go:304] Setting ErrFile to fd 1464...
I0709 10:11:53.528835   11120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 10:11:53.547179   11120 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 10:11:53.547292   11120 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 10:11:53.548022   11120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
I0709 10:11:56.026317   11120 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0709 10:11:56.026317   11120 main.go:141] libmachine: [stderr =====>] : 
I0709 10:11:56.045696   11120 ssh_runner.go:195] Run: systemctl --version
I0709 10:11:56.045771   11120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
I0709 10:11:58.323871   11120 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0709 10:11:58.323989   11120 main.go:141] libmachine: [stderr =====>] : 
I0709 10:11:58.323989   11120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
I0709 10:12:01.002527   11120 main.go:141] libmachine: [stdout =====>] : 172.18.200.147

                                                
                                                
I0709 10:12:01.002527   11120 main.go:141] libmachine: [stderr =====>] : 
I0709 10:12:01.002527   11120 sshutil.go:53] new ssh client: &{IP:172.18.200.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-779900\id_rsa Username:docker}
I0709 10:12:01.126480   11120 ssh_runner.go:235] Completed: systemctl --version: (5.0806986s)
I0709 10:12:01.138870   11120 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.82s)
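The JSON output above is an array of image records. A minimal sketch of consuming it (the `sample` literal below mirrors the shape of that output with a shortened id, not the real data):

```python
import json

# Sketch: parsing `image ls --format json` output.
# `sample` mimics the array-of-image-records shape seen in the test output above.
sample = '[{"id":"e6f18...","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]'
images = json.loads(sample)

# Flatten all repoTags across the image records.
tags = [tag for img in images for tag in img["repoTags"]]
print(tags)  # ['registry.k8s.io/pause:3.9']
```

In practice the input would come from `out/minikube-windows-amd64.exe -p functional-779900 image ls --format json` as run in the test.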

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (7.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image ls --format yaml --alsologtostderr: (7.9206575s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-779900 image ls --format yaml --alsologtostderr:
- id: ffb617c1c1dade496ad564012558379868adf1cc0a921fb1d5c886262fa3433c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-779900
size: "30"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "111000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "62000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "84700000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-779900
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-779900 image ls --format yaml --alsologtostderr:
W0709 10:11:53.499925   11260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0709 10:11:53.509240   11260 out.go:291] Setting OutFile to fd 1392 ...
I0709 10:11:53.510825   11260 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 10:11:53.510825   11260 out.go:304] Setting ErrFile to fd 1216...
I0709 10:11:53.510825   11260 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 10:11:53.530580   11260 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 10:11:53.530667   11260 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 10:11:53.532470   11260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
I0709 10:11:56.010837   11260 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0709 10:11:56.010950   11260 main.go:141] libmachine: [stderr =====>] : 
I0709 10:11:56.026317   11260 ssh_runner.go:195] Run: systemctl --version
I0709 10:11:56.026317   11260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
I0709 10:11:58.309514   11260 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0709 10:11:58.309514   11260 main.go:141] libmachine: [stderr =====>] : 
I0709 10:11:58.323406   11260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
I0709 10:12:01.059225   11260 main.go:141] libmachine: [stdout =====>] : 172.18.200.147

                                                
                                                
I0709 10:12:01.059266   11260 main.go:141] libmachine: [stderr =====>] : 
I0709 10:12:01.059266   11260 sshutil.go:53] new ssh client: &{IP:172.18.200.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-779900\id_rsa Username:docker}
I0709 10:12:01.190198   11260 ssh_runner.go:235] Completed: systemctl --version: (5.1638709s)
I0709 10:12:01.204454   11260 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (25.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-779900 ssh pgrep buildkitd: exit status 1 (9.452254s)

                                                
                                                
** stderr ** 
	W0709 10:12:01.329093   11772 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image build -t localhost/my-image:functional-779900 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image build -t localhost/my-image:functional-779900 testdata\build --alsologtostderr: (9.4243044s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-779900 image build -t localhost/my-image:functional-779900 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 224b4e1b88f7
---> Removed intermediate container 224b4e1b88f7
---> 1cb3d286363c
Step 3/3 : ADD content.txt /
---> ce9dfbc032b0
Successfully built ce9dfbc032b0
Successfully tagged localhost/my-image:functional-779900
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-779900 image build -t localhost/my-image:functional-779900 testdata\build --alsologtostderr:
W0709 10:12:10.761899   15048 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0709 10:12:10.765860   15048 out.go:291] Setting OutFile to fd 1352 ...
I0709 10:12:10.786945   15048 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 10:12:10.787037   15048 out.go:304] Setting ErrFile to fd 456...
I0709 10:12:10.787037   15048 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0709 10:12:10.800191   15048 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 10:12:10.817706   15048 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0709 10:12:10.818679   15048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
I0709 10:12:12.857682   15048 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0709 10:12:12.868998   15048 main.go:141] libmachine: [stderr =====>] : 
I0709 10:12:12.880083   15048 ssh_runner.go:195] Run: systemctl --version
I0709 10:12:12.880083   15048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-779900 ).state
I0709 10:12:14.954713   15048 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0709 10:12:14.967306   15048 main.go:141] libmachine: [stderr =====>] : 
I0709 10:12:14.967306   15048 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-779900 ).networkadapters[0]).ipaddresses[0]
I0709 10:12:17.439066   15048 main.go:141] libmachine: [stdout =====>] : 172.18.200.147

                                                
                                                
I0709 10:12:17.439066   15048 main.go:141] libmachine: [stderr =====>] : 
I0709 10:12:17.451643   15048 sshutil.go:53] new ssh client: &{IP:172.18.200.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-779900\id_rsa Username:docker}
I0709 10:12:17.560934   15048 ssh_runner.go:235] Completed: systemctl --version: (4.6808417s)
I0709 10:12:17.560934   15048 build_images.go:161] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2154365606.tar
I0709 10:12:17.574543   15048 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0709 10:12:17.603910   15048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2154365606.tar
I0709 10:12:17.610589   15048 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2154365606.tar: stat -c "%s %y" /var/lib/minikube/build/build.2154365606.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2154365606.tar': No such file or directory
I0709 10:12:17.610788   15048 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2154365606.tar --> /var/lib/minikube/build/build.2154365606.tar (3072 bytes)
I0709 10:12:17.673336   15048 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2154365606
I0709 10:12:17.705109   15048 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2154365606 -xf /var/lib/minikube/build/build.2154365606.tar
I0709 10:12:17.724079   15048 docker.go:360] Building image: /var/lib/minikube/build/build.2154365606
I0709 10:12:17.734437   15048 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-779900 /var/lib/minikube/build/build.2154365606
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0709 10:12:20.003678   15048 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-779900 /var/lib/minikube/build/build.2154365606: (2.2692361s)
I0709 10:12:20.014756   15048 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2154365606
I0709 10:12:20.051002   15048 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2154365606.tar
I0709 10:12:20.071016   15048 build_images.go:217] Built localhost/my-image:functional-779900 from C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2154365606.tar
I0709 10:12:20.071016   15048 build_images.go:133] succeeded building to: functional-779900
I0709 10:12:20.071016   15048 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image ls: (6.9929047s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (25.89s)
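For reference, the three build steps logged above imply a Dockerfile of roughly this shape (a reconstruction inferred from the `Step 1/3`..`Step 3/3` lines, not the actual contents of `testdata\build`):

```dockerfile
# Hypothetical reconstruction of the Dockerfile under testdata\build,
# inferred from the build steps in the output above.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```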

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.2912914s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-779900
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.63s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (25.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image load --daemon gcr.io/google-containers/addon-resizer:functional-779900 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image load --daemon gcr.io/google-containers/addon-resizer:functional-779900 --alsologtostderr: (17.167307s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image ls: (7.9887246s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (25.17s)

TestFunctional/parallel/ProfileCmd/profile_json_output (10.88s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (10.7085348s)
functional_test.go:1362: Took "10.7138493s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "161.9223ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (10.88s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image load --daemon gcr.io/google-containers/addon-resizer:functional-779900 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image load --daemon gcr.io/google-containers/addon-resizer:functional-779900 --alsologtostderr: (12.0943092s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image ls: (7.3828882s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.48s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.6545909s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-779900
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image load --daemon gcr.io/google-containers/addon-resizer:functional-779900 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image load --daemon gcr.io/google-containers/addon-resizer:functional-779900 --alsologtostderr: (13.6067561s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image ls: (6.8564235s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.40s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image save gcr.io/google-containers/addon-resizer:functional-779900 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image save gcr.io/google-containers/addon-resizer:functional-779900 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (8.3696s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (13.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image rm gcr.io/google-containers/addon-resizer:functional-779900 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image rm gcr.io/google-containers/addon-resizer:functional-779900 --alsologtostderr: (7.0118348s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image ls: (6.9388091s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (13.96s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.0509136s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image ls: (6.9969614s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.06s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-779900
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-779900 image save --daemon gcr.io/google-containers/addon-resizer:functional-779900 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-779900 image save --daemon gcr.io/google-containers/addon-resizer:functional-779900 --alsologtostderr: (8.4535653s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-779900
E0709 10:11:53.283779   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.85s)

TestFunctional/delete_addon-resizer_images (0.48s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-779900
--- PASS: TestFunctional/delete_addon-resizer_images (0.48s)

TestFunctional/delete_my-image_image (0.18s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-779900
--- PASS: TestFunctional/delete_my-image_image (0.18s)

TestFunctional/delete_minikube_cached_images (0.19s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-779900
--- PASS: TestFunctional/delete_minikube_cached_images (0.19s)

TestMultiControlPlane/serial/StartCluster (716s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-400600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0709 10:18:04.128940   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:04.144019   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:04.171339   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:04.198517   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:04.241493   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:04.337687   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:04.510891   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:04.846457   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:05.501444   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:06.793787   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:09.359708   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:14.486269   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:24.737802   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:18:45.221976   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:19:26.197925   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:20:30.091787   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 10:20:48.133462   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:23:04.133878   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:23:31.981060   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 10:25:30.095493   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-400600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m18.7177749s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 status -v=7 --alsologtostderr: (37.2830166s)
--- PASS: TestMultiControlPlane/serial/StartCluster (716.00s)

TestMultiControlPlane/serial/DeployApp (11.38s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-400600 -- rollout status deployment/busybox: (4.2524857s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0709 10:28:04.133741   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-q8dt8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-q8dt8 -- nslookup kubernetes.io: (1.9808048s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-sf672 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-wvs72 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-q8dt8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-sf672 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-wvs72 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-q8dt8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-sf672 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-400600 -- exec busybox-fc5497c4f-wvs72 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (11.38s)

TestMultiControlPlane/serial/AddWorkerNode (261.26s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-400600 -v=7 --alsologtostderr
E0709 10:30:30.081018   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-400600 -v=7 --alsologtostderr: (3m31.6522358s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 status -v=7 --alsologtostderr
E0709 10:33:04.134432   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 status -v=7 --alsologtostderr: (49.611444s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (261.26s)

TestMultiControlPlane/serial/NodeLabels (0.19s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-400600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (29.45s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (29.4462067s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (29.45s)

TestMultiControlPlane/serial/CopyFile (645.73s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 status --output json -v=7 --alsologtostderr
E0709 10:34:27.344657   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 status --output json -v=7 --alsologtostderr: (49.7381418s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp testdata\cp-test.txt ha-400600:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp testdata\cp-test.txt ha-400600:/home/docker/cp-test.txt: (9.679391s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test.txt": (9.6915828s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1711650581\001\cp-test_ha-400600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1711650581\001\cp-test_ha-400600.txt: (9.7576096s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test.txt"
E0709 10:35:30.081447   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test.txt": (9.7003516s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600:/home/docker/cp-test.txt ha-400600-m02:/home/docker/cp-test_ha-400600_ha-400600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600:/home/docker/cp-test.txt ha-400600-m02:/home/docker/cp-test_ha-400600_ha-400600-m02.txt: (17.1531247s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test.txt": (9.8565982s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test_ha-400600_ha-400600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test_ha-400600_ha-400600-m02.txt": (9.721628s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600:/home/docker/cp-test.txt ha-400600-m03:/home/docker/cp-test_ha-400600_ha-400600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600:/home/docker/cp-test.txt ha-400600-m03:/home/docker/cp-test_ha-400600_ha-400600-m03.txt: (17.0203872s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test.txt": (9.6567962s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test_ha-400600_ha-400600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test_ha-400600_ha-400600-m03.txt": (9.6644517s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600:/home/docker/cp-test.txt ha-400600-m04:/home/docker/cp-test_ha-400600_ha-400600-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600:/home/docker/cp-test.txt ha-400600-m04:/home/docker/cp-test_ha-400600_ha-400600-m04.txt: (17.0910887s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test.txt": (9.6966861s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test_ha-400600_ha-400600-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test_ha-400600_ha-400600-m04.txt": (9.883331s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp testdata\cp-test.txt ha-400600-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp testdata\cp-test.txt ha-400600-m02:/home/docker/cp-test.txt: (9.7249732s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test.txt": (9.7868168s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1711650581\001\cp-test_ha-400600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1711650581\001\cp-test_ha-400600-m02.txt: (9.706022s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test.txt"
E0709 10:38:04.128291   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test.txt": (9.7544234s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m02:/home/docker/cp-test.txt ha-400600:/home/docker/cp-test_ha-400600-m02_ha-400600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m02:/home/docker/cp-test.txt ha-400600:/home/docker/cp-test_ha-400600-m02_ha-400600.txt: (17.2731358s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test.txt": (9.7454447s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test_ha-400600-m02_ha-400600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test_ha-400600-m02_ha-400600.txt": (9.7530317s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m02:/home/docker/cp-test.txt ha-400600-m03:/home/docker/cp-test_ha-400600-m02_ha-400600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m02:/home/docker/cp-test.txt ha-400600-m03:/home/docker/cp-test_ha-400600-m02_ha-400600-m03.txt: (16.9156267s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test.txt": (9.9527512s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test_ha-400600-m02_ha-400600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test_ha-400600-m02_ha-400600-m03.txt": (9.7340054s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m02:/home/docker/cp-test.txt ha-400600-m04:/home/docker/cp-test_ha-400600-m02_ha-400600-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m02:/home/docker/cp-test.txt ha-400600-m04:/home/docker/cp-test_ha-400600-m02_ha-400600-m04.txt: (17.0237452s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test.txt": (9.7679441s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test_ha-400600-m02_ha-400600-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test_ha-400600-m02_ha-400600-m04.txt": (9.9561537s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp testdata\cp-test.txt ha-400600-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp testdata\cp-test.txt ha-400600-m03:/home/docker/cp-test.txt: (9.9656775s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test.txt": (10.1495219s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1711650581\001\cp-test_ha-400600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1711650581\001\cp-test_ha-400600-m03.txt: (9.9482392s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test.txt"
E0709 10:40:30.096809   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test.txt": (9.9481127s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt ha-400600:/home/docker/cp-test_ha-400600-m03_ha-400600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt ha-400600:/home/docker/cp-test_ha-400600-m03_ha-400600.txt: (17.2045267s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test.txt": (9.8622911s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test_ha-400600-m03_ha-400600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test_ha-400600-m03_ha-400600.txt": (9.9515459s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt ha-400600-m02:/home/docker/cp-test_ha-400600-m03_ha-400600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt ha-400600-m02:/home/docker/cp-test_ha-400600-m03_ha-400600-m02.txt: (17.1120276s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test.txt": (9.7085439s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test_ha-400600-m03_ha-400600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test_ha-400600-m03_ha-400600-m02.txt": (9.6071668s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt ha-400600-m04:/home/docker/cp-test_ha-400600-m03_ha-400600-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m03:/home/docker/cp-test.txt ha-400600-m04:/home/docker/cp-test_ha-400600-m03_ha-400600-m04.txt: (16.9551242s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test.txt": (9.68793s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test_ha-400600-m03_ha-400600-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test_ha-400600-m03_ha-400600-m04.txt": (9.6408454s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp testdata\cp-test.txt ha-400600-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp testdata\cp-test.txt ha-400600-m04:/home/docker/cp-test.txt: (9.7334211s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test.txt": (9.6616419s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1711650581\001\cp-test_ha-400600-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1711650581\001\cp-test_ha-400600-m04.txt: (9.6485037s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test.txt"
E0709 10:43:04.122162   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test.txt": (9.7068014s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt ha-400600:/home/docker/cp-test_ha-400600-m04_ha-400600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt ha-400600:/home/docker/cp-test_ha-400600-m04_ha-400600.txt: (17.0627034s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test.txt": (9.8074722s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test_ha-400600-m04_ha-400600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600 "sudo cat /home/docker/cp-test_ha-400600-m04_ha-400600.txt": (9.9633143s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt ha-400600-m02:/home/docker/cp-test_ha-400600-m04_ha-400600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt ha-400600-m02:/home/docker/cp-test_ha-400600-m04_ha-400600-m02.txt: (17.0502159s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test.txt": (9.6908939s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test_ha-400600-m04_ha-400600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m02 "sudo cat /home/docker/cp-test_ha-400600-m04_ha-400600-m02.txt": (9.668365s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt ha-400600-m03:/home/docker/cp-test_ha-400600-m04_ha-400600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 cp ha-400600-m04:/home/docker/cp-test.txt ha-400600-m03:/home/docker/cp-test_ha-400600-m04_ha-400600-m03.txt: (17.1541656s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m04 "sudo cat /home/docker/cp-test.txt": (9.7248036s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test_ha-400600-m04_ha-400600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-400600 ssh -n ha-400600-m03 "sudo cat /home/docker/cp-test_ha-400600-m04_ha-400600-m03.txt": (9.674326s)
--- PASS: TestMultiControlPlane/serial/CopyFile (645.73s)

                                                
                                    
TestImageBuild/serial/Setup (197.27s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-424600 --driver=hyperv
E0709 10:50:30.088838   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 10:51:07.352809   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-424600 --driver=hyperv: (3m17.2719919s)
--- PASS: TestImageBuild/serial/Setup (197.27s)

                                                
                                    
TestImageBuild/serial/NormalBuild (9.75s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-424600
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-424600: (9.7541586s)
--- PASS: TestImageBuild/serial/NormalBuild (9.75s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (9s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-424600
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-424600: (8.9943578s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.00s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.76s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-424600
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-424600: (7.7616129s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.76s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.67s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-424600
E0709 10:53:04.130401   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-424600: (7.6667229s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.67s)

                                                
                                    
TestJSONOutput/start/Command (211.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-180400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0709 10:55:30.094418   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-180400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m31.7865505s)
--- PASS: TestJSONOutput/start/Command (211.79s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (7.88s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-180400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-180400 --output=json --user=testUser: (7.8836324s)
--- PASS: TestJSONOutput/pause/Command (7.88s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (7.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-180400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-180400 --output=json --user=testUser: (7.6723922s)
--- PASS: TestJSONOutput/unpause/Command (7.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (34.13s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-180400 --output=json --user=testUser
E0709 10:58:04.124177   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-180400 --output=json --user=testUser: (34.1313197s)
--- PASS: TestJSONOutput/stop/Command (34.13s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.31s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-590200 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-590200 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (210.8932ms)

-- stdout --
	{"specversion":"1.0","id":"bab45fb8-ff2f-4b2c-9965-8ae1bfb75533","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-590200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f6e8bfa-8b11-4d31-be42-7d9622ef3777","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"f7d3d9bc-9199-4c05-b08e-b2eccbba87cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9f7ea570-1a19-48c2-b0c2-fea5b5b34dfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"0f2af6b2-0c3f-4ca6-a417-2a87f36d97ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19199"}}
	{"specversion":"1.0","id":"0c6e5d5d-5808-4d0c-9d8e-611494aa6021","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"942bdc5c-1897-4244-bbe5-4bb229fe4d5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0709 10:58:23.033266    6740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-590200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-590200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-590200: (1.0938536s)
--- PASS: TestErrorJSONOutput (1.31s)
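Each line in the `-- stdout --` block above is a CloudEvents envelope emitted by `minikube start --output=json`. As a minimal sketch of how such a line can be inspected (the event line below is copied verbatim from the error event in the output above; the variable names are illustrative), the envelope decodes with the standard library:

```python
import json

# The io.k8s.sigs.minikube.error event, copied verbatim from "-- stdout --" above.
event_line = (
    '{"specversion":"1.0","id":"942bdc5c-1897-4244-bbe5-4bb229fe4d5d",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
    '"datacontenttype":"application/json","data":{"advice":"","exitcode":"56",'
    '"issues":"","message":"The driver \'fail\' is not supported on windows/amd64",'
    '"name":"DRV_UNSUPPORTED_OS","url":""}}'
)

event = json.loads(event_line)
# For error events, the envelope's data block carries the error name and the
# exit code, which matches the process exit status (56) reported by the test.
if event["type"] == "io.k8s.sigs.minikube.error":
    print(event["data"]["name"], event["data"]["exitcode"])  # DRV_UNSUPPORTED_OS 56
```

The step and info events earlier in the block follow the same envelope, differing only in their `type` and `data` fields.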

                                                
                                    
TestMainNoArgs (0.17s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.17s)

                                                
                                    
TestMinikubeProfile (529.43s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-090000 --driver=hyperv
E0709 11:00:30.088228   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-090000 --driver=hyperv: (3m16.470342s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-090000 --driver=hyperv
E0709 11:01:53.314645   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 11:03:04.136980   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-090000 --driver=hyperv: (3m21.9404817s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-090000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.3076242s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-090000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0709 11:05:30.092298   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.278335s)
helpers_test.go:175: Cleaning up "second-090000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-090000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-090000: (46.0032057s)
helpers_test.go:175: Cleaning up "first-090000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-090000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-090000: (45.6954256s)
--- PASS: TestMinikubeProfile (529.43s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (156.31s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-823500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0709 11:07:47.356698   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 11:08:04.127536   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-823500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m35.3007867s)
--- PASS: TestMountStart/serial/StartWithMountFirst (156.31s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (9.49s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-823500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-823500 ssh -- ls /minikube-host: (9.484418s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.49s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (158.45s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-823500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0709 11:10:30.097434   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-823500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m37.437025s)
--- PASS: TestMountStart/serial/StartWithMountSecond (158.45s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (9.61s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-823500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-823500 ssh -- ls /minikube-host: (9.607902s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.61s)

                                                
                                    
TestMountStart/serial/DeleteFirst (31.37s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-823500 --alsologtostderr -v=5
E0709 11:13:04.135341   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-823500 --alsologtostderr -v=5: (31.3677595s)
--- PASS: TestMountStart/serial/DeleteFirst (31.37s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (9.49s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-823500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-823500 ssh -- ls /minikube-host: (9.4897908s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.49s)

                                                
                                    
TestMountStart/serial/Stop (26.5s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-823500
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-823500: (26.4986386s)
--- PASS: TestMountStart/serial/Stop (26.50s)

                                                
                                    
TestMountStart/serial/RestartStopped (118.95s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-823500
E0709 11:15:30.091391   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-823500: (1m57.9422932s)
--- PASS: TestMountStart/serial/RestartStopped (118.95s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (9.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-823500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-823500 ssh -- ls /minikube-host: (9.4186623s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.42s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-849000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                    
TestMultiNode/serial/ProfileList (9.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.6442383s)
--- PASS: TestMultiNode/serial/ProfileList (9.64s)

                                                
                                    
TestPreload (517.56s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-650200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0709 11:50:30.104632   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 11:51:53.346341   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
E0709 11:53:04.143232   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-650200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m30.0052262s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-650200 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-650200 image pull gcr.io/k8s-minikube/busybox: (7.9989086s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-650200
E0709 11:55:30.098810   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-650200: (38.3982184s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-650200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0709 11:57:47.387731   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
E0709 11:58:04.136897   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-650200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m34.2769528s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-650200 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-650200 image list: (6.9622725s)
helpers_test.go:175: Cleaning up "test-preload-650200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-650200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-650200: (39.8915784s)
--- PASS: TestPreload (517.56s)

                                                
                                    
TestScheduledStopWindows (325.04s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-078000 --memory=2048 --driver=hyperv
E0709 12:00:30.108147   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-078000 --memory=2048 --driver=hyperv: (3m13.5354822s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-078000 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-078000 --schedule 5m: (10.6086981s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-078000 -n scheduled-stop-078000
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-078000 -n scheduled-stop-078000: exit status 1 (10.0168486s)

                                                
                                                
** stderr ** 
	W0709 12:02:28.838151   14296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-078000 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-078000 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.2203902s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-078000 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-078000 --schedule 5s: (10.2090651s)
E0709 12:03:04.145204   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-779900\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-078000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-078000: exit status 7 (2.2694879s)

                                                
                                                
-- stdout --
	scheduled-stop-078000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 12:03:58.290904    5392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-078000 -n scheduled-stop-078000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-078000 -n scheduled-stop-078000: exit status 7 (2.2811181s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 12:04:00.576475    1156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-078000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-078000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-078000: (26.8517996s)
--- PASS: TestScheduledStopWindows (325.04s)

                                                
                                    
TestRunningBinaryUpgrade (851.56s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2083619443.exe start -p running-upgrade-745900 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2083619443.exe start -p running-upgrade-745900 --memory=2200 --vm-driver=hyperv: (6m40.660499s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-745900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-745900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m18.7477593s)
helpers_test.go:175: Cleaning up "running-upgrade-745900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-745900
E0709 12:30:30.112798   15032 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-291800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-745900: (1m10.5225069s)
--- PASS: TestRunningBinaryUpgrade (851.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-492900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-492900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (329.3532ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-492900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 12:04:29.754469    4372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

                                                
                                    

Test skip (30/196)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-779900 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-779900 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 10064: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

                                                
                                    
TestFunctional/parallel/DryRun (5.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-779900 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-779900 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0230285s)

                                                
                                                
-- stdout --
	* [functional-779900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 10:09:42.273744     756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0709 10:09:42.280668     756 out.go:291] Setting OutFile to fd 1364 ...
	I0709 10:09:42.282000     756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:09:42.282000     756 out.go:304] Setting ErrFile to fd 1368...
	I0709 10:09:42.282000     756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:09:42.306227     756 out.go:298] Setting JSON to false
	I0709 10:09:42.310830     756 start.go:129] hostinfo: {"hostname":"minikube1","uptime":3251,"bootTime":1720541731,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 10:09:42.310984     756 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 10:09:42.315267     756 out.go:177] * [functional-779900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 10:09:42.318058     756 notify.go:220] Checking for updates...
	I0709 10:09:42.321138     756 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:09:42.321980     756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 10:09:42.327233     756 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 10:09:42.330143     756 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 10:09:42.332852     756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 10:09:42.336803     756 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:09:42.338294     756 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-779900 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-779900 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0371694s)

                                                
                                                
-- stdout --
	* [functional-779900] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19199
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0709 10:09:47.300779    1720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0709 10:09:47.303691    1720 out.go:291] Setting OutFile to fd 692 ...
	I0709 10:09:47.304168    1720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:09:47.304168    1720 out.go:304] Setting ErrFile to fd 600...
	I0709 10:09:47.304168    1720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0709 10:09:47.338062    1720 out.go:298] Setting JSON to false
	I0709 10:09:47.344902    1720 start.go:129] hostinfo: {"hostname":"minikube1","uptime":3256,"bootTime":1720541731,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4598 Build 19045.4598","kernelVersion":"10.0.19045.4598 Build 19045.4598","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0709 10:09:47.345078    1720 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0709 10:09:47.350151    1720 out.go:177] * [functional-779900] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4598 Build 19045.4598
	I0709 10:09:47.356436    1720 notify.go:220] Checking for updates...
	I0709 10:09:47.357058    1720 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0709 10:09:47.360419    1720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0709 10:09:47.363262    1720 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0709 10:09:47.365849    1720 out.go:177]   - MINIKUBE_LOCATION=19199
	I0709 10:09:47.368916    1720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0709 10:09:47.374233    1720 config.go:182] Loaded profile config "functional-779900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0709 10:09:47.376429    1720 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)